system_instruction (string, 29-665 chars) | user_request (string, 15-889 chars) | context_document (string, 561-153k chars) | full_prompt (string, 74-153k chars)
---|---|---|---|
Base your response only on the document provided. List them in bullet point format. If you cannot answer using the context alone, say "I cannot determine the answer to that due to lack of context". | At week 8, what urological changes were observed in the experimental group of mice (injected with ketamine)? | **Changes to the bladder epithelial barrier are associated with ketamine-induced cystitis - Results **
Animals and ketamine administration A total of 60 8-week-old female C57BL/6 mice (weight, 19.08±1.29 g) were obtained from Renming Hospital of Wuhan University Laboratory Animal Center (Wuhan, China). Mice were maintained under a 12-h light/dark cycle at a constant temperature (21–22°C) and humidity (50%). Mice had free access to food and tap water prior to the experiments. Mice were randomly allocated into two groups, control group and ketamine treatment group, and the mice in each group were subsequently subdivided into three subgroups (4, 8 and 12 week groups; n=10 mice/subgroup). Mice received daily intraperitoneal injections of saline (control group) or 100 mg/kg ketamine (Gutian Fuxing Pharmaceutical Co., Ltd., Gutian, China) (ketamine treatment group) to model the effects of repeated ketamine abuse, which was previously described in a study by Meng et al (15).
Micturition behavior Micturition frequency was determined as previously described in a study by Gu et al (10). In brief, short-term micturition frequency of freely moving mice was observed at the end of 4, 8 and 12 weeks of treatment. At the end of the 4, 8 and 12 weeks, respectively, mice were placed in five separate square lattices containing a gridded filter paper pad, with each small grid containing a mouse. Filter paper was impregnated with saturated copper sulfate solution (CuSO4·5H2O) and dehydrated at 200°C for 1 h prior to use. When urine fell onto this filter paper, the anhydrous CuSO4 was rehydrated and turned blue. Subsequent to 2 h, the numbers of urine spots >0.2 cm in diameter were counted and recorded by five people independently with ultraviolet illumination.
Histopathological and immunohistochemical analysis Mice were sacrificed with an intraperitoneal injection of sodium pentobarbital (100 mg/kg, Sigma-Aldrich; Merck KGaA, Germany) and bladders were excised for analyses. For histopathological analysis, half of the bladder tissues were fixed in 4% phosphate-buffered paraformaldehyde overnight at room temperature, dehydrated in serial ethanol concentrations, cleared in xylene and embedded into paraffin wax. Serial paraffin sections of 5 µm in thickness were created, stained with haematoxylin and eosin and subsequently examined under a light microscope (Nikon Corp., Tokyo, Japan).
For immunohistochemical analysis, tissue sections were dewaxed and rehydrated with graded xylene and serial ethanol concentrations. Subsequently, immunohistochemical labeling for zonula occludens-1 (ZO-1) was performed. Following the blocking of nonspecific antibody activity using Tris-buffered saline (Sigma-Aldrich; Merck KGaA) containing 1% bovine serum albumin and 10% fetal calf serum at 37°C for 2 h, the tissue sections were incubated with primary antibody anti-ZO-1 (BSA-1543, 1:100 dilution; BioGenex, San Ramon, CA, USA) overnight at 4°C. Subsequently, hydrogen peroxide was used to eliminate the endogenous peroxidase activity at 37°C for 10 min. The distribution of ZO-1 marker was scored into two levels by two independent histologists: Normal and abnormal. If normal, the ZO-1 marker was distributed throughout the urothelium and more intensely under the umbrella cell layer with minimal expression in the cytoplasm. If abnormal, the distribution of the ZO-1 marker was patchy, absent, or expressed in the cytoplasm and not localized on cell surfaces.
Ultrastructure of bladder samples For ultrastructure analysis, half of the bladder tissue samples from each mouse were fixed in 2.5% glutaraldehyde buffered in 0.1 M phosphate buffer, post-fixed in buffered 1% osmium tetroxide, dehydrated using ascending grades of ethanol and dry acetone, embedded in epoxy resin, and finally left in a resin-polymerizing oven overnight at 65°C. The protocol for ultrastructure analysis was in accordance with the method published in the study by Jeong et al (16) with some modifications. Ultrathin sections of 70 nm in thickness were created, mounted on 200-mesh hexagonal copper grids and stained with lead citrate. The ultrastructural urothelium of the bladder samples was observed using a Hitachi H-600 transmission electron microscope (TEM; Hitachi, Ltd., Tokyo, Japan).
Statistical analysis Data are expressed as the mean ± standard error of the mean. Statistical analyses were performed using Prism v.5.0 software (GraphPad Software, Inc., La Jolla, CA, USA). Independent-samples t-tests were used to detect significant differences in micturition frequency between two groups. Fisher's exact test was used to measure any significant differences in ZO-1 expression. P<0.05 was considered to indicate a statistically significant difference.
Results
Micturition behavior As exhibited in Fig. 1, the micturition frequencies in the ketamine-treated and control groups were determined as 8.05±1.799 and 8.36±1.492 following 4 weeks of treatment, and there was no significant difference in micturition frequency between the two groups at this time point (P>0.05). However, following 8 weeks of treatment, the micturition frequency in the ketamine-treated group was determined as 11.90±3.348 and was significantly increased compared with that of the control group (8.50±1.581; P<0.01). Similar results were obtained for the micturition frequency in the ketamine-treated group (15.30±4.423) following 12 weeks of treatment, and this was significantly higher than that of the control group (8.50±1.581; P=0.001).
Figure 1.
Micturition frequency of freely moving mice measured in a 2-h time period. Data are presented as the mean ± standard error of the mean. **P<0.05 and ***P<0.01.
Bladder pathology and immunohistochemistry The urinary bladders of the ketamine-treated mice displayed some pathology differences when compared with the controls. When compared with the control group (Fig. 2A), there was no significant inflammatory cell infiltration and arterial dilatation following 4 weeks of ketamine treatment (Fig. 2B); however, arterial dilatation and congestion were observed under the submucosal epithelium of the urinary bladder following 8 weeks of ketamine treatment (indicated by arrows; Fig. 2C). In addition to the above symptoms, inflammatory cells, predominantly lymphocytes and occasionally macrophages, had infiltrated into the submucosal epithelium of the urinary bladders of mice in the ketamine-treated group after 12 weeks of treatment (indicated by arrows; Fig. 2D).
Figure 2.
Haematoxylin and eosin staining of midsagittal sections of murine bladders. Bladder sections from (A) the control group (magnification, ×400), (B) ketamine-treated mice following 4 weeks of treatment (magnification, ×400), (C) ketamine-treated mice following 8 weeks of treatment (magnification, ×200) and (D) ketamine-treated mice following 12 weeks of treatment (magnification, ×400).
ZO-1 was localized to the superficial umbrella cell layer at the apicolateral junction in the majority of control group samples (Fig. 3A); however, in the ketamine treatment groups, bladders exhibited a heterogeneous staining distribution, indicating that ZO-1 was distributed in the cytoplasm and not organized into tight junction structures, or was absent (Fig. 3B). Additionally, the number of samples exhibiting abnormal ZO-1 distribution in the ketamine-treated group was increased compared with the control group, with abnormal ZO-1 distribution in 70 vs. 0% (P=0.003) following 4 weeks, 70 vs. 10% (P=0.022) following 8 weeks and 90 vs. 10% (P=0.001) following 12 weeks of treatment in the ketamine-treated and control groups, respectively (Table I).
Figure 3.
Representative immunohistochemical images of ZO-1 protein. For immunohistochemical analysis, tissue sections were incubated with primary antibody anti-ZO-1 overnight at 4°C. (A) In control mice, ZO-1 was localized to the superficial umbrella cell layer at the interendothelial junctions in most samples. (B) In the ketamine group, ZO-1 was located in the cytoplasm and not organized into tight junction structures, or was absent. Magnification, ×200; scale bar, 100 µm. ZO-1, zonula occludens-1.
Table I.
Distribution of ZO-1 protein in each group.
Treatment duration (weeks) | ZO-1 distribution | Ketamine, n (%) | Control, n (%) | P-value^a
---|---|---|---|---
4 | Normal | 3 (30) | 10 (100) | 0.003
4 | Abnormal | 7 (70) | 0 (0) |
8 | Normal | 3 (30) | 9 (90) | 0.022
8 | Abnormal | 7 (70) | 1 (10) |
12 | Normal | 1 (10) | 9 (90) | 0.001
12 | Abnormal | 9 (90) | 1 (10) |
aFisher's exact test. ZO-1, zonula occludens-1.
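As a quick sanity check of the Fisher's exact test results in Table I, the two-sided p-values can be recomputed directly from the counts in the table; the minimal Python/SciPy sketch below does this. It is only an illustrative recomputation, not part of the original analysis, and because two-sided Fisher p-values depend on the convention used, the output may differ slightly from the published values (the 8-week comparison, for example, comes out near 0.02 with SciPy's default method).

```python
# Recompute the Table I p-values with Fisher's exact test.
from scipy.stats import fisher_exact

# 2x2 tables: rows = ZO-1 distribution (normal, abnormal), columns = (ketamine, control).
tables = {
    "4 weeks": [[3, 10], [7, 0]],
    "8 weeks": [[3, 9], [7, 1]],
    "12 weeks": [[1, 9], [9, 1]],
}

for label, counts in tables.items():
    _, p_value = fisher_exact(counts, alternative="two-sided")
    print(f"{label}: two-sided Fisher's exact P = {p_value:.3f}")
```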
Ultrastructure of bladder samples Morphological ultrastructural alterations to the uroepithelium and lamina propria of bladder cells were determined using TEM. As demonstrated in Fig. 4A-D, the surface of the umbrella cells in the control group were suffused with raised microplicae (indicated by arrows in Fig. 4A and B) and the cytoplasm of these cells contained multiple subapical vesicles (indicated with asterisks in Fig. 4B). Marked changes in bladder morphology were observed in the ketamine-treated group compared with the control group. The surface of the umbrella cells appeared more flattened following 4 (Fig. 4E-H) and 8 weeks (Fig. 4I-L), and even diminished following 12 weeks (Fig. 4M-P), of treatment in the ketamine-treated group compared with the control group. Thin tight junction complexes were frequently observed between the umbrella cells following 4 weeks of treatment in the ketamine-treated group (Fig. 4G); however, these distinct junction complexes were not observed following 8 and 12 weeks of ketamine treatment (Fig. 4K and O). In addition, the vascular endothelial cells exhibited cell body shrinkage, increased cytoplasm density and chromatin condensation following 8 weeks (Fig. 4L) and layer denudation following 12 weeks (Fig. 4P) of ketamine treatment.
| <Article>
==================
**Changes to the bladder epithelial barrier are associated with ketamine-induced cystitis - Results **
Animals and ketamine administration A total of 60 8-week-old female C57BL/6 mice (weight, 19.08±1.29 g) were obtained from Renming Hospital of Wuhan University Laboratory Animal Center (Wuhan, China). Mice were maintained under a 12-h light/dark cycle at a constant temperature (21–22°C) and humidity (50%). Mice had free access to food and tap water prior to the experiments. Mice were randomly allocated into two groups, control group and ketamine treatment group, and the mice in each group were subsequently subdivided into three subgroups (4, 8 and 12 week groups; n=10 mice/subgroup). Mice received daily intraperitoneal injections of saline (control group) or 100 mg/kg ketamine (Gutian Fuxing Pharmaceutical Co., Ltd., Gutian, China) (ketamine treatment group) to model the effects of repeated ketamine abuse, which was previously described in a study by Meng et al (15).
Micturition behavior Micturition frequency was determined as previously described in a study by Gu et al (10). In brief, short-term micturition frequency of freely moving mice was observed at the end of 4, 8 and 12 weeks of treatment. At the end of the 4, 8 and 12 weeks, respectively, mice were placed in five separate square lattices containing a gridded filter paper pad, with each small grid containing a mouse. Filter paper was impregnated with saturated copper sulfate solution (CuSO4·5H2O) and dehydrated at 200°C for 1 h prior to use. When urine fell onto this filter paper, the anhydrous CuSO4 was rehydrated and turned blue. Subsequent to 2 h, the numbers of urine spots >0.2 cm in diameter were counted and recorded by five people independently with ultraviolet illumination.
Histopathological and immunohistochemical analysis Mice were sacrificed with an intraperitoneal injection of sodium pentobarbital (100 mg/kg, Sigma-Aldrich; Merck KGaA, Germany) and bladders were excised for analyses. For histopathological analysis, half of the bladder tissues were fixed in 4% phosphate-buffered paraformaldehyde overnight at room temperature, dehydrated in serial ethanol concentrations, cleared in xylene and embedded into paraffin wax. Serial paraffin sections of 5 µm in thickness were created, stained with haematoxylin and eosin and subsequently examined under a light microscope (Nikon Corp., Tokyo, Japan).
For immunohistochemical analysis, tissue sections were dewaxed and rehydrated with graded xylene and serial ethanol concentrations. Subsequently, immunohistochemical labeling for zonula occludens-1 (ZO-1) was performed. Following the blocking of nonspecific antibody activity using Tris-buffered saline (Sigma-Aldrich; Merck KGaA) containing 1% bovine serum albumin and 10% fetal calf serum at 37°C for 2 h, the tissue sections were incubated with primary antibody anti-ZO-1 (BSA-1543, 1:100 dilution; BioGenex, San Ramon, CA, USA) overnight at 4°C. Subsequently, hydrogen peroxide was used to eliminate the endogenous peroxidase activity at 37°C for 10 min. The distribution of ZO-1 marker was scored into two levels by two independent histologists: Normal and abnormal. If normal, the ZO-1 marker was distributed throughout the urothelium and more intensely under the umbrella cell layer with minimal expression in the cytoplasm. If abnormal, the distribution of the ZO-1 marker was patchy, absent, or expressed in the cytoplasm and not localized on cell surfaces.
Ultrastructure of bladder samples For ultrastructure analysis, half of the bladder tissue samples from each mouse were fixed in 2.5% glutaraldehyde buffered in 0.1 M phosphate buffer, post-fixed in buffered 1% osmium tetroxide, dehydrated using ascending grades of ethanol and dry acetone, embedded in epoxy resin, and finally left in a resin-polymerizing oven overnight at 65°C. The protocol for ultrastructure analysis was in accordance with the method published in the study by Jeong et al (16) with some modifications. Ultrathin sections of 70 nm in thickness were created, mounted on 200-mesh hexagonal copper grids and stained with lead citrate. The ultrastructural urothelium of the bladder samples was observed using a Hitachi H-600 transmission electron microscope (TEM; Hitachi, Ltd., Tokyo, Japan).
Statistical analysis Data are expressed as the mean ± standard error of the mean. Statistical analyses were performed using Prism v.5.0 software (GraphPad Software, Inc., La Jolla, CA, USA). Independent-samples t-tests were used to detect significant differences in micturition frequency between two groups. Fisher's exact test was used to measure any significant differences in ZO-1 expression. P<0.05 was considered to indicate a statistically significant difference.
Results
Micturition behavior As exhibited in Fig. 1, the micturition frequencies in the ketamine-treated and control groups were determined as 8.05±1.799 and 8.36±1.492 following 4 weeks of treatment, and there was no significant difference in micturition frequency between the two groups at this time point (P>0.05). However, following 8 weeks of treatment, the micturition frequency in the ketamine-treated group was determined as 11.90±3.348 and was significantly increased compared with that of the control group (8.50±1.581; P<0.01). Similar results were obtained for the micturition frequency in the ketamine-treated group (15.30±4.423) following 12 weeks of treatment, and this was significantly higher than that of the control group (8.50±1.581; P=0.001).
Figure 1.
Micturition frequency of freely moving mice measured in a 2-h time period. Data are presented as the mean ± standard error of the mean. **P<0.05 and ***P<0.01.
Bladder pathology and immunohistochemistry The urinary bladders of the ketamine-treated mice displayed some pathology differences when compared with the controls. When compared with the control group (Fig. 2A), there was no significant inflammatory cell infiltration and arterial dilatation following 4 weeks of ketamine treatment (Fig. 2B); however, arterial dilatation and congestion were observed under the submucosal epithelium of the urinary bladder following 8 weeks of ketamine treatment (indicated by arrows; Fig. 2C). In addition to the above symptoms, inflammatory cells, predominantly lymphocytes and occasionally macrophages, had infiltrated into the submucosal epithelium of the urinary bladders of mice in the ketamine-treated group after 12 weeks of treatment (indicated by arrows; Fig. 2D).
Figure 2.
Haematoxylin and eosin staining of midsagittal sections of murine bladders. Bladder sections from (A) the control group (magnification, ×400), (B) ketamine-treated mice following 4 weeks of treatment (magnification, ×400), (C) ketamine-treated mice following 8 weeks of treatment (magnification, ×200) and (D) ketamine-treated mice following 12 weeks of treatment (magnification, ×400).
ZO-1 was localized to the superficial umbrella cell layer at the apicolateral junction in the majority of control group samples (Fig. 3A); however, in the ketamine treatment groups, bladders exhibited a heterogeneous staining distribution, indicating that ZO-1 was distributed in the cytoplasm and not organized into tight junction structures, or was absent (Fig. 3B). Additionally, the number of samples exhibiting abnormal ZO-1 distribution in the ketamine-treated group was increased compared with the control group, with abnormal ZO-1 distribution in 70 vs. 0% (P=0.003) following 4 weeks, 70 vs. 10% (P=0.022) following 8 weeks and 90 vs. 10% (P=0.001) following 12 weeks of treatment in the ketamine-treated and control groups, respectively (Table I).
Figure 3.
Representative immunohistochemical images of ZO-1 protein. For immunohistochemical analysis, tissue sections were incubated with primary antibody anti-ZO-1 overnight at 4°C. (A) In control mice, ZO-1 was localized to the superficial umbrella cell layer at the interendothelial junctions in most samples. (B) In the ketamine group, ZO-1 was located in the cytoplasm and not organized into tight junction structures, or was absent. Magnification, ×200; scale bar, 100 µm. ZO-1, zonula occludens-1.
Table I.
Distribution of ZO-1 protein in each group.
Treatment duration (weeks) | ZO-1 distribution | Ketamine, n (%) | Control, n (%) | P-value^a
---|---|---|---|---
4 | Normal | 3 (30) | 10 (100) | 0.003
4 | Abnormal | 7 (70) | 0 (0) |
8 | Normal | 3 (30) | 9 (90) | 0.022
8 | Abnormal | 7 (70) | 1 (10) |
12 | Normal | 1 (10) | 9 (90) | 0.001
12 | Abnormal | 9 (90) | 1 (10) |
aFisher's exact test. ZO-1, zonula occludens-1.
Ultrastructure of bladder samples Morphological ultrastructural alterations to the uroepithelium and lamina propria of bladder cells were determined using TEM. As demonstrated in Fig. 4A-D, the surface of the umbrella cells in the control group were suffused with raised microplicae (indicated by arrows in Fig. 4A and B) and the cytoplasm of these cells contained multiple subapical vesicles (indicated with asterisks in Fig. 4B). Marked changes in bladder morphology were observed in the ketamine-treated group compared with the control group. The surface of the umbrella cells appeared more flattened following 4 (Fig. 4E-H) and 8 weeks (Fig. 4I-L), and even diminished following 12 weeks (Fig. 4M-P), of treatment in the ketamine-treated group compared with the control group. Thin tight junction complexes were frequently observed between the umbrella cells following 4 weeks of treatment in the ketamine-treated group (Fig. 4G); however, these distinct junction complexes were not observed following 8 and 12 weeks of ketamine treatment (Fig. 4K and O). In addition, the vascular endothelial cells exhibited cell body shrinkage, increased cytoplasm density and chromatin condensation following 8 weeks (Fig. 4L) and layer denudation following 12 weeks (Fig. 4P) of ketamine treatment.
<Query>
==================
At week 8, what urological changes were observed in the experimental group of mice (injected with ketamine)?
<Instructions>
==================
Base your response only on the document provided. List them in bullet point format. If you cannot answer using the context alone, say "I cannot determine the answer to that due to lack of context". |
You must respond to the prompt using information in the context block. Do not use information from other sources. Always format lists using bullet points. | List the potential challenges that each remedy faces in the monopolization case against Google | On August 5, 2024, the U.S. District Court for the District of Columbia held that Google unlawfully
monopolizes the markets for general search services and general search text ads through a series of
exclusive contracts with browser developers, mobile device manufacturers, and wireless carriers. The
opinion emerges from a lawsuit filed by the Department of Justice (DOJ) and a group of state attorneys
general (AGs) in 2020. The DOJ lawsuit was later consolidated with a separate case filed by another
group of state AGs that largely adopted the allegations in the DOJ’s complaint. This Legal Sidebar
provides an overview of the court’s decision and issues that may arise during the remedies phase of the
case.
The Challenged Agreements
The DOJ’s lawsuit targets distribution contracts that allegedly allow Google to foreclose (i.e., deny access
to) significant shares of the markets for general search and general search text ads. Some of the contracts
involve browser developers like Apple (the developer of the Safari browser) and Mozilla (the developer
of the Firefox browser). Under these agreements, Google pays developers a share of its search ads
revenue in exchange for the developers preloading Google as the default search engine for their browsers.
Other contracts involve manufacturers of Android mobile devices, such as Motorola and Samsung. These
agreements allow manufacturers to preinstall certain proprietary Google apps, like the Google Play Store,
on their devices. In exchange for that access, manufacturers must also preload other Google apps,
including Google Search and the Chrome browser, which defaults to Google Search.
Other agreements involve revenue-sharing arrangements with device manufacturers and wireless carriers.
Under these contracts, Google pays manufacturers and carriers a share of its revenue from search ads in
exchange for the preinstallation of Google Search at certain access points. Some of these agreements also
prohibit Google’s counterparties from preinstalling or promoting alternative general search engines.
The Court’s Decision
The DOJ’s lawsuit contends that Google’s distribution agreements constitute unlawful monopolization
under Section 2 of the Sherman Act because they foreclose substantial shares of the relevant markets and
deprive rivals of the scale needed to improve their search engines. The monopolization offense has two
elements: (1) the possession of monopoly power, and (2) exclusionary conduct. The following subsections
discuss the district court’s analysis of both elements.
Monopoly Power
The Supreme Court has characterized “monopoly power” as “the power to control prices or exclude
competition.” More specifically, monopoly power involves a substantial degree of market power—the
ability to raise prices above costs without sacrificing profits. Plaintiffs can establish monopoly power via
direct proof that a firm has in fact profitably raised prices substantially above competitive levels or
indirect structural evidence that supports an inference of monopoly power. Under the more common
indirect approach, plaintiffs can prove monopoly power by showing that the defendant possesses a
dominant market share that is protected by entry barriers.
To calculate market shares, plaintiffs must define a relevant market in which competition occurs. The
scope of the relevant market is determined by the range of reasonable substitutes for the good or service
in question. In evaluating substitutability, courts rely on both quantitative evidence and a series of
qualitative factors from Brown Shoe Co. v. United States, a 1962 Supreme Court decision. Because of its
centrality in establishing market power and monopoly power, market definition is often a dispositive issue
in antitrust cases.
Applying this legal framework, the district court concluded that Google has monopoly power in two
markets: general search services and general search text ads. The court relied on several of the Brown
Shoe factors in defining these markets, rejecting Google’s arguments that the relevant markets are
broader. Instead of a market for general search, Google had posited a larger market for “query responses”
that included vertical search engines (e.g., Expedia, Yelp), social media platforms, and other websites.
The court declined to adopt Google’s proposed market, reasoning that vertical search engines do not
respond to the range of queries answered by general search engines, even if they can serve as substitutes
for discrete purposes. The court concluded that Google has monopoly power in the narrower market for
general search based on the firm’s market share of over 89% and significant entry barriers like high
capital costs, Google’s control of key distribution channels, brand recognition, and scale.
In analyzing the scope of the relevant advertiser-side market, the court recognized markets for both search
advertising and general search text ads, rejecting Google’s argument for a broader digital advertising
market. Among other things, the court reasoned that search ads (which are displayed in response to
specific queries) are not reasonably interchangeable with other types of digital ads because search ads
allow advertisers to target customers with greater precision. The court ultimately determined that Google
lacks monopoly power in the market for search advertising—which includes search ads on vertical search
engines and social media platforms—because of an absence of entry barriers. However, the court found
that Google is a monopolist in the narrower market for general search text ads based on the firm’s
dominant market share and the entry barriers discussed above.
The district court’s decision finds that Google is liable for violating the Sherman Act, but does not impose
remedies for those violations. Earlier in the litigation, the court granted the parties’ joint request to
bifurcate the liability and remedies phases of the case. The court has ordered the parties to propose a
schedule for remedies proceedings by September 4, 2024. Google has said that it plans to appeal the
court’s liability decision, but it is unclear whether the appeal will proceed before or after the district court
imposes remedies.
During the remedies phase, the district court will have several options. The narrowest would involve an
injunction prohibiting Google’s exclusive contracts. An injunction barring exclusivity was the remedy in
United States v. Dentsply, a monopolization case involving exclusive dealing that was resolved in 2006.
The most cited antitrust treatise also suggests that, in cases involving a single category of anticompetitive
conduct like exclusive dealing, a targeted injunction may be the most appropriate remedy. This type of
relief could allow distributors to negotiate default arrangements with other search engines, retain Google
as their defaults without receiving payments conditioned on exclusivity, or offer consumers a “choice
screen” directing them to select their own default search engine. The court may also consider ordering
Google to adopt a choice screen on Android devices, but it likely lacks the authority to require the
relevant browser developers to do so because they are not parties to the litigation.
Another possibility is a broader injunction requiring Google to share search data with rivals. This type of
mandatory pooling could facilitate the emergence of rival search engines, but might also create free-rider
problems that disincentivize investments in improving search quality. Such an arrangement might also
prove difficult for the court to administer.
| You must respond to the prompt using information in the context block. Do not use information from other sources. Always format lists using bullet points.
List the potential challenges that each remedy faces in the monopolization case against Google
On August 5, 2024, the U.S. District Court for the District of Columbia held that Google unlawfully
monopolizes the markets for general search services and general search text ads through a series of
exclusive contracts with browser developers, mobile device manufacturers, and wireless carriers. The
opinion emerges from a lawsuit filed by the Department of Justice (DOJ) and a group of state attorneys
general (AGs) in 2020. The DOJ lawsuit was later consolidated with a separate case filed by another
group of state AGs that largely adopted the allegations in the DOJ’s complaint. This Legal Sidebar
provides an overview of the court’s decision and issues that may arise during the remedies phase of the
case.
The Challenged Agreements
The DOJ’s lawsuit targets distribution contracts that allegedly allow Google to foreclose (i.e., deny access
to) significant shares of the markets for general search and general search text ads. Some of the contracts
involve browser developers like Apple (the developer of the Safari browser) and Mozilla (the developer
of the Firefox browser). Under these agreements, Google pays developers a share of its search ads
revenue in exchange for the developers preloading Google as the default search engine for their browsers.
Other contracts involve manufacturers of Android mobile devices, such as Motorola and Samsung. These
agreements allow manufacturers to preinstall certain proprietary Google apps, like the Google Play Store,
on their devices. In exchange for that access, manufacturers must also preload other Google apps,
including Google Search and the Chrome browser, which defaults to Google Search.
Other agreements involve revenue-sharing arrangements with device manufacturers and wireless carriers.
Under these contracts, Google pays manufacturers and carriers a share of its revenue from search ads in
exchange for the preinstallation of Google Search at certain access points. Some of these agreements also
prohibit Google’s counterparties from preinstalling or promoting alternative general search engines.
The Court’s Decision
The DOJ’s lawsuit contends that Google’s distribution agreements constitute unlawful monopolization
under Section 2 of the Sherman Act because they foreclose substantial shares of the relevant markets and
deprive rivals of the scale needed to improve their search engines. The monopolization offense has two
elements: (1) the possession of monopoly power, and (2) exclusionary conduct. The following subsections
discuss the district court’s analysis of both elements.
Monopoly Power
The Supreme Court has characterized “monopoly power” as “the power to control prices or exclude
competition.” More specifically, monopoly power involves a substantial degree of market power—the
ability to raise prices above costs without sacrificing profits. Plaintiffs can establish monopoly power via
direct proof that a firm has in fact profitably raised prices substantially above competitive levels or
indirect structural evidence that supports an inference of monopoly power. Under the more common
indirect approach, plaintiffs can prove monopoly power by showing that the defendant possesses a
dominant market share that is protected by entry barriers.
To calculate market shares, plaintiffs must define a relevant market in which competition occurs. The
scope of the relevant market is determined by the range of reasonable substitutes for the good or service
in question. In evaluating substitutability, courts rely on both quantitative evidence and a series of
qualitative factors from Brown Shoe Co. v. United States, a 1962 Supreme Court decision. Because of its
centrality in establishing market power and monopoly power, market definition is often a dispositive issue
in antitrust cases.
Applying this legal framework, the district court concluded that Google has monopoly power in two
markets: general search services and general search text ads. The court relied on several of the Brown
Shoe factors in defining these markets, rejecting Google’s arguments that the relevant markets are
broader. Instead of a market for general search, Google had posited a larger market for “query responses”
that included vertical search engines (e.g., Expedia, Yelp), social media platforms, and other websites.
The court declined to adopt Google’s proposed market, reasoning that vertical search engines do not
respond to the range of queries answered by general search engines, even if they can serve as substitutes
for discrete purposes. The court concluded that Google has monopoly power in the narrower market for
general search based on the firm’s market share of over 89% and significant entry barriers like high
capital costs, Google’s control of key distribution channels, brand recognition, and scale.
In analyzing the scope of the relevant advertiser-side market, the court recognized markets for both search
advertising and general search text ads, rejecting Google’s argument for a broader digital advertising
market. Among other things, the court reasoned that search ads (which are displayed in response to
specific queries) are not reasonably interchangeable with other types of digital ads because search ads
allow advertisers to target customers with greater precision. The court ultimately determined that Google
lacks monopoly power in the market for search advertising—which includes search ads on vertical search
engines and social media platforms—because of an absence of entry barriers. However, the court found
that Google is a monopolist in the narrower market for general search text ads based on the firm’s
dominant market share and the entry barriers discussed above.
The district court’s decision finds that Google is liable for violating the Sherman Act, but does not impose
remedies for those violations. Earlier in the litigation, the court granted the parties’ joint request to
bifurcate the liability and remedies phases of the case. The court has ordered the parties to propose a
schedule for remedies proceedings by September 4, 2024. Google has said that it plans to appeal the
court’s liability decision, but it is unclear whether the appeal will proceed before or after the district court
imposes remedies.
During the remedies phase, the district court will have several options. The narrowest would involve an
injunction prohibiting Google’s exclusive contracts. An injunction barring exclusivity was the remedy in
United States v. Dentsply, a monopolization case involving exclusive dealing that was resolved in 2006.
The most cited antitrust treatise also suggests that, in cases involving a single category of anticompetitive
conduct like exclusive dealing, a targeted injunction may be the most appropriate remedy. This type of
relief could allow distributors to negotiate default arrangements with other search engines, retain Google
as their defaults without receiving payments conditioned on exclusivity, or offer consumers a “choice
screen” directing them to select their own default search engine. The court may also consider ordering
Google to adopt a choice screen on Android devices, but it likely lacks the authority to require the
relevant browser developers to do so because they are not parties to the litigation.
Another possibility is a broader injunction requiring Google to share search data with rivals. This type of
mandatory pooling could facilitate the emergence of rival search engines, but might also create free-rider
problems that disincentivize investments in improving search quality. Such an arrangement might also
prove difficult for the court to administer.
|
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | As a software architect, I'm considering microservices for a large-scale system. Can you explain the differences between microservices and monolithic architectures in terms of scalability, deployment and fault isolation? Also, what challenges arise in microservices regarding data consistency and inter-service communication ? Please provide your response in less than 200 words. | Microservices Architecture: A Paradigm Shift in Distributed Systems
The evolution of software architecture has led to the emergence of microservices as a dominant paradigm in distributed systems design. This architectural style represents a significant departure from traditional monolithic structures, offering enhanced scalability, flexibility, and resilience. However, it also introduces new challenges that must be carefully considered during implementation.
Microservices vs. Monolithic Architectures
Scalability:
Monolithic architectures, characterized by their single-tiered software application structure, often face scalability issues as the codebase grows. Scaling requires replication of the entire application, leading to inefficient resource utilization. In contrast, microservices allow for independent scaling of individual components. This granular scalability enables organizations to allocate resources more efficiently, scaling only the services that require additional capacity.
Deployment:
Deployment in monolithic systems typically involves updating the entire application, even for minor changes. This process can be time-consuming and risky, potentially affecting the entire system's stability. Microservices, however, facilitate continuous deployment and integration (CI/CD) practices. Each service can be deployed independently, reducing deployment complexity and allowing for more frequent updates with minimal system-wide impact.
Fault Isolation:
In monolithic architectures, a fault in any module can potentially bring down the entire system. Microservices architecture inherently provides better fault isolation. Since services are independent, a failure in one service does not necessarily affect the others, enhancing overall system resilience.
Challenges in Adopting Microservices
While microservices offer numerous advantages, their adoption is not without challenges:
Data Consistency:
Maintaining data consistency across distributed services is a significant challenge. The distributed nature of microservices often necessitates the implementation of eventual consistency models, which can complicate application logic and user experience. Techniques such as event sourcing and CQRS (Command Query Responsibility Segregation) are often employed to address these issues, but they introduce their own complexities.
Inter-service Communication:
As the number of services grows, the complexity of inter-service communication increases exponentially. This can lead to network latency issues and potential points of failure. Implementing robust service discovery mechanisms, efficient load balancing, and resilient communication patterns (e.g., circuit breakers, retries) becomes crucial.
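In practice, the resilient communication patterns named above (circuit breakers, retries) are supplied by a resilience library or a service mesh; purely as a minimal sketch of the circuit-breaker idea, the Python class below fails fast once a downstream call has failed repeatedly and permits a trial call after a cooldown. The class name, thresholds, and the example endpoint are illustrative assumptions, not part of any specific framework.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: fail fast after repeated failures, retry after a cooldown."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold  # consecutive failures before opening
        self.reset_timeout = reset_timeout          # seconds to wait before allowing a trial call
        self.failures = 0
        self.opened_at = None                       # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            # Circuit is open: fail fast until the cooldown elapses, then let one trial call through.
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        else:
            self.failures = 0
            self.opened_at = None
            return result

# Hypothetical usage, wrapping a flaky downstream HTTP call:
# breaker = CircuitBreaker()
# breaker.call(requests.get, "http://inventory-service/items/42", timeout=2)
```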
Distributed Transactions:
Ensuring transactional integrity across multiple services is challenging. The lack of a centralized database often requires the implementation of complex distributed transaction patterns such as the Saga pattern or two-phase commit protocols.
Operational Complexity:
The distributed nature of microservices increases operational complexity. Monitoring, logging, and debugging become more challenging as the system is spread across multiple services. Implementing comprehensive observability solutions becomes essential for effective system management.
Service Boundaries:
Defining appropriate service boundaries is crucial for the success of a microservices architecture. Incorrectly defined boundaries can lead to tightly coupled services, negating many of the benefits of this architectural style.
Implementation Considerations
When implementing a microservices architecture, several key factors must be considered:
API Gateway:
An API gateway acts as a single entry point for all client requests, routing them to appropriate services. It can also handle cross-cutting concerns such as authentication, rate limiting, and request/response transformation.
Service Discovery:
Dynamic service discovery mechanisms are essential in microservices environments where service instances may come and go frequently. Tools like Consul or Eureka are often used for this purpose.
Containerization and Orchestration:
Container technologies like Docker, coupled with orchestration platforms such as Kubernetes, have become de facto standards in microservices deployments. They provide consistency across development and production environments and facilitate efficient resource utilization.
Event-Driven Architecture:
Adopting an event-driven approach can help decouple services and improve system responsiveness. Technologies like Apache Kafka or RabbitMQ are commonly used for implementing event-driven architectures in microservices.
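To illustrate the decoupling this buys, the toy Python sketch below stands in for a real broker such as Kafka or RabbitMQ: the service that publishes an event does not know which services consume it, so new consumers can be added without touching the publisher. The event names and handlers are invented for the example.

```python
from collections import defaultdict

class EventBus:
    """In-memory publish/subscribe bus; a real deployment would use a message broker instead."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every subscriber; the publisher knows nothing about them.
        for handler in self._subscribers[event_type]:
            handler(payload)

# Hypothetical services reacting independently to the same event.
bus = EventBus()
bus.subscribe("order_placed", lambda event: print("billing: charge order", event["order_id"]))
bus.subscribe("order_placed", lambda event: print("shipping: schedule order", event["order_id"]))
bus.publish("order_placed", {"order_id": 42})
```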
Polyglot Persistence:
Microservices allow for the use of different data storage technologies for different services based on their specific requirements. This polyglot persistence approach can optimize data storage and retrieval for each service.
Conclusion
Microservices architecture offers significant advantages in terms of scalability, deployment flexibility, and fault isolation. However, it also introduces complexities in data consistency management, inter-service communication, and operational overhead. Successful implementation requires careful consideration of these challenges and the adoption of appropriate patterns and technologies to address them. As with any architectural decision, the choice to adopt microservices should be based on a thorough analysis of the specific requirements and constraints of the system being developed. | [question]
As a software architect, I'm considering microservices for a large-scale system. Can you explain the differences between microservices and monolithic architectures in terms of scalability, deployment and fault isolation? Also, what challenges arise in microservices regarding data consistency and inter-service communication ? Please provide your response in less than 200 words.
=====================
[text]
Microservices Architecture: A Paradigm Shift in Distributed Systems
The evolution of software architecture has led to the emergence of microservices as a dominant paradigm in distributed systems design. This architectural style represents a significant departure from traditional monolithic structures, offering enhanced scalability, flexibility, and resilience. However, it also introduces new challenges that must be carefully considered during implementation.
Microservices vs. Monolithic Architectures
Scalability:
Monolithic architectures, characterized by their single-tiered software application structure, often face scalability issues as the codebase grows. Scaling requires replication of the entire application, leading to inefficient resource utilization. In contrast, microservices allow for independent scaling of individual components. This granular scalability enables organizations to allocate resources more efficiently, scaling only the services that require additional capacity.
Deployment:
Deployment in monolithic systems typically involves updating the entire application, even for minor changes. This process can be time-consuming and risky, potentially affecting the entire system's stability. Microservices, however, facilitate continuous deployment and integration (CI/CD) practices. Each service can be deployed independently, reducing deployment complexity and allowing for more frequent updates with minimal system-wide impact.
Fault Isolation:
In monolithic architectures, a fault in any module can potentially bring down the entire system. Microservices architecture inherently provides better fault isolation. Since services are independent, a failure in one service does not necessarily affect the others, enhancing overall system resilience.
Challenges in Adopting Microservices
While microservices offer numerous advantages, their adoption is not without challenges:
Data Consistency:
Maintaining data consistency across distributed services is a significant challenge. The distributed nature of microservices often necessitates the implementation of eventual consistency models, which can complicate application logic and user experience. Techniques such as event sourcing and CQRS (Command Query Responsibility Segregation) are often employed to address these issues, but they introduce their own complexities.
Inter-service Communication:
As the number of services grows, the complexity of inter-service communication increases exponentially. This can lead to network latency issues and potential points of failure. Implementing robust service discovery mechanisms, efficient load balancing, and resilient communication patterns (e.g., circuit breakers, retries) becomes crucial.
Distributed Transactions:
Ensuring transactional integrity across multiple services is challenging. The lack of a centralized database often requires the implementation of complex distributed transaction patterns such as the Saga pattern or two-phase commit protocols.
Operational Complexity:
The distributed nature of microservices increases operational complexity. Monitoring, logging, and debugging become more challenging as the system is spread across multiple services. Implementing comprehensive observability solutions becomes essential for effective system management.
Service Boundaries:
Defining appropriate service boundaries is crucial for the success of a microservices architecture. Incorrectly defined boundaries can lead to tightly coupled services, negating many of the benefits of this architectural style.
Implementation Considerations
When implementing a microservices architecture, several key factors must be considered:
API Gateway:
An API gateway acts as a single entry point for all client requests, routing them to appropriate services. It can also handle cross-cutting concerns such as authentication, rate limiting, and request/response transformation.
Service Discovery:
Dynamic service discovery mechanisms are essential in microservices environments where service instances may come and go frequently. Tools like Consul or Eureka are often used for this purpose.
Containerization and Orchestration:
Container technologies like Docker, coupled with orchestration platforms such as Kubernetes, have become de facto standards in microservices deployments. They provide consistency across development and production environments and facilitate efficient resource utilization.
Event-Driven Architecture:
Adopting an event-driven approach can help decouple services and improve system responsiveness. Technologies like Apache Kafka or RabbitMQ are commonly used for implementing event-driven architectures in microservices.
Polyglot Persistence:
Microservices allow for the use of different data storage technologies for different services based on their specific requirements. This polyglot persistence approach can optimize data storage and retrieval for each service.
Conclusion
Microservices architecture offers significant advantages in terms of scalability, deployment flexibility, and fault isolation. However, it also introduces complexities in data consistency management, inter-service communication, and operational overhead. Successful implementation requires careful consideration of these challenges and the adoption of appropriate patterns and technologies to address them. As with any architectural decision, the choice to adopt microservices should be based on a thorough analysis of the specific requirements and constraints of the system being developed.
https://azure.microsoft.com/en-us/blog/microservices-architecture-on-azure-kubernetes-service/
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | My cousin and I are considering getting vaccinations for the first time shes 57 and I'm 50 but currently pregnant. Considering our medical history and age, I need to know if there are any risks for us. List 3 best reasons to get or not to get the shot with bullet points. | Recombinant zoster (shingles) vaccine can prevent shingles.
Shingles (also called herpes zoster, or just zoster) is a painful skin rash, usually with blisters. In addition to the rash, shingles can cause fever, headache, chills, or upset stomach. Rarely, shingles can lead to complications such as pneumonia, hearing problems, blindness, brain inflammation (encephalitis), or death.
The risk of shingles increases with age. The most common complication of shingles is long-term nerve pain called postherpetic neuralgia (PHN). PHN occurs in the areas where the shingles rash was and can last for months or years after the rash goes away. The pain from PHN can be severe and debilitating.
The risk of PHN increases with age. An older adult with shingles is more likely to develop PHN and have longer lasting and more severe pain than a younger person.
People with weakened immune systems also have a higher risk of getting shingles and complications from the disease.
Shingles is caused by varicella-zoster virus, the same virus that causes chickenpox. After you have chickenpox, the virus stays in your body and can cause shingles later in life. Shingles cannot be passed from one person to another, but the virus that causes shingles can spread and cause chickenpox in someone who has never had chickenpox or has never received chickenpox vaccine.
Recombinant shingles vaccine
Recombinant shingles vaccine provides strong protection against shingles. By preventing shingles, recombinant shingles vaccine also protects against PHN and other complications.
Recombinant shingles vaccine is recommended for:
Adults 50 years and older
Adults 19 years and older who have a weakened immune system because of disease or treatments
Shingles vaccine is given as a two-dose series. For most people, the second dose should be given 2 to 6 months after the first dose. Some people who have or will have a weakened immune system can get the second dose 1 to 2 months after the first dose. Ask your health care provider for guidance.
People who have had shingles in the past and people who have received varicella (chickenpox) vaccine are recommended to get recombinant shingles vaccine. The vaccine is also recommended for people who have already gotten another type of shingles vaccine, the live shingles vaccine. There is no live virus in recombinant shingles vaccine.
Shingles vaccine may be given at the same time as other vaccines.
Talk with your health care provider
Tell your vaccination provider if the person getting the vaccine:
Has had an allergic reaction after a previous dose of recombinant shingles vaccine, or has any severe, life-threatening allergies
Is currently experiencing an episode of shingles
Is pregnant
In some cases, your health care provider may decide to postpone shingles vaccination until a future visit.
People with minor illnesses, such as a cold, may be vaccinated. People who are moderately or severely ill should usually wait until they recover before getting recombinant shingles vaccine.
Your health care provider can give you more information.
Risks of a vaccine reaction
A sore arm with mild or moderate pain is very common after recombinant shingles vaccine. Redness and swelling can also happen at the site of the injection.
Tiredness, muscle pain, headache, shivering, fever, stomach pain, and nausea are common after recombinant shingles vaccine.
These side effects may temporarily prevent a vaccinated person from doing regular activities. Symptoms usually go away on their own in 2 to 3 days. You should still get the second dose of recombinant shingles vaccine even if you had one of these reactions after the first dose.
Guillain-Barré syndrome (GBS), a serious nervous system disorder, has been reported very rarely after recombinant zoster vaccine.
People sometimes faint after medical procedures, including vaccination. Tell your provider if you feel dizzy or have vision changes or ringing in the ears.
As with any medicine, there is a very remote chance of a vaccine causing a severe allergic reaction, other serious injury, or death.
What if there is a serious problem?
An allergic reaction could occur after the vaccinated person leaves the clinic. If you see signs of a severe allergic reaction (hives, swelling of the face and throat, difficulty breathing, a fast heartbeat, dizziness, or weakness), call 9-1-1 and get the person to the nearest hospital. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
My cousin and I are considering getting vaccinations for the first time; she's 57 and I'm 50 but currently pregnant. Considering our medical history and age, I need to know if there are any risks for us. List 3 best reasons to get or not to get the shot with bullet points.
{passage 0}
==========
Recombinant zoster (shingles) vaccine can prevent shingles.
Shingles (also called herpes zoster, or just zoster) is a painful skin rash, usually with blisters. In addition to the rash, shingles can cause fever, headache, chills, or upset stomach. Rarely, shingles can lead to complications such as pneumonia, hearing problems, blindness, brain inflammation (encephalitis), or death.
The risk of shingles increases with age. The most common complication of shingles is long-term nerve pain called postherpetic neuralgia (PHN). PHN occurs in the areas where the shingles rash was and can last for months or years after the rash goes away. The pain from PHN can be severe and debilitating.
The risk of PHN increases with age. An older adult with shingles is more likely to develop PHN and have longer lasting and more severe pain than a younger person.
People with weakened immune systems also have a higher risk of getting shingles and complications from the disease.
Shingles is caused by varicella-zoster virus, the same virus that causes chickenpox. After you have chickenpox, the virus stays in your body and can cause shingles later in life. Shingles cannot be passed from one person to another, but the virus that causes shingles can spread and cause chickenpox in someone who has never had chickenpox or has never received chickenpox vaccine.
Recombinant shingles vaccine
Recombinant shingles vaccine provides strong protection against shingles. By preventing shingles, recombinant shingles vaccine also protects against PHN and other complications.
Recombinant shingles vaccine is recommended for:
Adults 50 years and older
Adults 19 years and older who have a weakened immune system because of disease or treatments
Shingles vaccine is given as a two-dose series. For most people, the second dose should be given 2 to 6 months after the first dose. Some people who have or will have a weakened immune system can get the second dose 1 to 2 months after the first dose. Ask your health care provider for guidance.
People who have had shingles in the past and people who have received varicella (chickenpox) vaccine are recommended to get recombinant shingles vaccine. The vaccine is also recommended for people who have already gotten another type of shingles vaccine, the live shingles vaccine. There is no live virus in recombinant shingles vaccine.
Shingles vaccine may be given at the same time as other vaccines.
Talk with your health care provider
Tell your vaccination provider if the person getting the vaccine:
Has had an allergic reaction after a previous dose of recombinant shingles vaccine, or has any severe, life-threatening allergies
Is currently experiencing an episode of shingles
Is pregnant
In some cases, your health care provider may decide to postpone shingles vaccination until a future visit.
People with minor illnesses, such as a cold, may be vaccinated. People who are moderately or severely ill should usually wait until they recover before getting recombinant shingles vaccine.
Your health care provider can give you more information.
Risks of a vaccine reaction
A sore arm with mild or moderate pain is very common after recombinant shingles vaccine. Redness and swelling can also happen at the site of the injection.
Tiredness, muscle pain, headache, shivering, fever, stomach pain, and nausea are common after recombinant shingles vaccine.
These side effects may temporarily prevent a vaccinated person from doing regular activities. Symptoms usually go away on their own in 2 to 3 days. You should still get the second dose of recombinant shingles vaccine even if you had one of these reactions after the first dose.
Guillain-Barré syndrome (GBS), a serious nervous system disorder, has been reported very rarely after recombinant zoster vaccine.
People sometimes faint after medical procedures, including vaccination. Tell your provider if you feel dizzy or have vision changes or ringing in the ears.
As with any medicine, there is a very remote chance of a vaccine causing a severe allergic reaction, other serious injury, or death.
What if there is a serious problem?
An allergic reaction could occur after the vaccinated person leaves the clinic. If you see signs of a severe allergic reaction (hives, swelling of the face and throat, difficulty breathing, a fast heartbeat, dizziness, or weakness), call 9-1-1 and get the person to the nearest hospital.
https://www.cdc.gov/vaccines/hcp/vis/vis-statements/shingles-recombinant.html |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | We used to have a 10-acre waterlocked property by Lake Erie, in Ohio, in a remote section of the shoreline. Recently, we sold half of the property to our neighbor. Now, we can only reach the road by passing through his property or using a boat. No other way is available. However, he has been creating problems for us, because he doesn't want us to use the road. He even made a fence in the middle of it. He says that it's his property and he can do whatever he wants with it. Can we prevail? Answer in 150 words. | Such stipulation, which is neither as complete nor satisfactory as could be desired, shows that on May 13, 1867, one Mary Lane acquired by deeds lands which embrace the properties now owned by the plaintiffs and defendant, respectively, plus a strip of land 33 feet in width running from the southeast corner of the 25-acre tract, now owned by plaintiffs, east to the center of a thoroughfare called Cahoon road.
By deed recorded August 10, 1881, this same Mary Lane acquired title to another strip of land 33 feet wide and extending east from the northeast corner of plaintiffs' present land to the center of Cahoon road, which was used until the year 1928 for the purpose of ingress and egress.
By deed recorded September 5, 1881, Mary Lane conveyed to the New York, Chicago & St. Louis Railroad Company a right of way which effected a complete separation of the lands now owned by plaintiffs and defendant.
Thus, in 1881, a condition was brought about whereby the original parcel of land was divided by a railroad right of way with two strips of land 33 feet wide and extending from Cahoon road to the 25-acre tract lying south of the railroad right of way and now belonging to plaintiffs.
The property involved in the instant controversy continued to be owned by Mary Lane and her heirs until February 19, 1921, when the heirs conveyed the same to two persons named Dodd and Aldrich. In the conveyance there were three separate descriptions, one description included plaintiffs' present property, another defendant's present property and the remaining one the strip of land 33 feet wide and extending from the northeast corner of plaintiffs' premises to the center of Cahoon road.
Sometime during the year 1921 Dodd and Aldrich constructed a crossing seven feet wide over the tracks and right of way of the railroad and connecting the premises now owned by plaintiffs with those now owned by defendant. Such railroad crossing was used by Dodd and Aldrich from the year 1922, and upon the establishment of Forest Drive in 1925 they traveled across the land now owned by defendant along a line between the railroad crossing and the south end of Forest Drive. The nature and extent of such use are not disclosed, but it apparently continued for an undisclosed purpose until the separate and distinct tax sales in 1940.
By the present action plaintiffs seek to enjoin the defendant from interfering with their use of the passage or alleged easement from their land across his land to Forest Drive.
An easement has been defined as "a right without profit, created by grant or prescription, which the owner of one estate [called the dominant estate] may exercise in or over the estate of another [called the servient estate] for the benefit of the former." Yeager v. Tuning, 79 Ohio St. 121, 124, 86 N.E. 657, 658, 19 L.R.A. (N.S.), 700, 128 Am. St. Rep., 679.
An easement may be acquired only by grant, express or implied, or by prescription.
Where, however, the easement sought to be enforced is grounded upon implication rather than express grant, it must be clearly established that such a right exists. Implied easements are not favored because they are in derogation of the rule that written instruments speak for themselves. Ciski v. Wentworth, 122 Ohio St. 487, 172 N.E. 276.
An implied easement is based upon the theory that whenever one conveys property he includes in the conveyance whatever is necessary for its beneficial use and enjoyment and retains whatever is necessary for the use and enjoyment of the land retained.
There being in this case no express grant of an easement, it becomes necessary to determine whether one arose by implication.
Easements may be implied in several ways — from an existing use at the time of the severance of ownership in land, from a conveyance describing the premises as bounded upon a way, from a conveyance with reference to a plat or map or from necessity alone, as in the case of ways of necessity. 15 Ohio Jurisprudence, 37, Section 27.
Here, we are concerned only with the first and last of these methods, namely, a use existing at the time of severance or a way of necessity.
It is a well settled rule that a use must be continuous, apparent, permanent and necessary to be the basis of an implied easement upon the severance of the ownership of an estate. 28 Corpus Juris Secundum, Easements, 691, Section 33; and 15 Ohio Jurisprudence, 37, 45, Sections 28, 33.
For a use to be permanent in character "it is required that the use shall have been so long continued prior to severance and so obvious as to show that it was meant to be permanent; a mere temporary provision or arrangement made for the convenience of the entire estate will not constitute that degree of permanency required to burden the property with a continuance of the same when divided or separated by conveyance to different parties." 28 Corpus Juris Secundum, Easements, 691, 692, Section 33; and 15 Ohio Jurisprudence, 41, Section 31.
Plaintiffs having failed, then, to present facts sufficient to warrant the finding of an implied easement from an existing use, we come to a consideration of whether the facts disclosed are such as to sustain a way of necessity.
An implied easement or way of necessity is based upon the theory that without it the grantor or grantee, as the case may be, can not make use of his land. It has been stated that "necessity does not of itself create a right of way, but is said to furnish evidence of the grantor's intention to convey a right of way and, therefore, raises an implication of grant." 17 American Jurisprudence, 961, Section 48.
A way of necessity will not be implied where the claimant has another means of ingress or egress, whether over his own land or over the land of another.
For over 40 years thereafter there was no connection between these lands. As already noted, up to the year 1928 the strip of land 33 feet wide, still in the names of Dodd and Aldrich and connecting Cahoon road with the northeast corner of plaintiffs' property, was used as a way of travel to and from such property.
In our opinion plaintiffs do have a means of access to their lands from Cahoon road over the strip of ground 33 feet wide, referred to above, now belonging to those in plaintiffs' chain of title, and this being so they are not in a position to successfully assert an easement or way of necessity over defendant's property.
A way of necessity will not be implied, where there is another or other outlets available to a public thoroughfare, even though such other outlets are less convenient and would necessitate the expenditure of a considerable sum of money to render them serviceable. 15 Ohio Jurisprudence, 62, Section 44.
"A way of necessity will not be decreed unless the evidence showing the need therefor is clear and convincing. Such a way is not sanctioned when there is available another means of ingress and egress to and from the claimant's land even though it may be less convenient and will involve some labor and expense to repair and maintain."
Although it would be much more convenient and much less expensive for plaintiffs to traverse defendant's property to reach a public street, the imposition of such a burden on defendant's land on the theory of a way of necessity is legally unwarranted in the circumstances exhibited by the record.
The judgment of the Court of Appeals is, therefore, reversed and final judgment rendered for defendant. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
We used to have a 10-acre waterlocked property by Lake Erie, in Ohio, in a remote section of the shoreline. Recently, we sold half of the property to our neighbor. Now, we can only reach the road by passing through his property or using a boat. No other way is available. However, he has been creating problems for us, because he doesn't want us to use the road. He even made a fence in the middle of it. He says that it's his property and he can do whatever he wants with it. Can we prevail? Answer in 150 words.
{passage 0}
==========
Such stipulation, which is neither as complete nor satisfactory as could be desired, shows that on May 13, 1867, one Mary Lane acquired by deeds lands which embrace the properties now owned by the plaintiffs and defendant, respectively, plus a strip of land 33 feet in width running from the southeast corner of the 25-acre tract, now owned by plaintiffs, east to the center of a thoroughfare called Cahoon road.
By deed recorded August 10, 1881, this same Mary Lane acquired title to another strip of land 33 feet wide and extending east from the northeast corner of plaintiffs' present land to the center of Cahoon road, which was used until the year 1928 for the purpose of ingress and egress.
By deed recorded September 5, 1881, Mary Lane conveyed to the New York, Chicago & St. Louis Railroad Company a right of way which effected a complete separation of the lands now owned by plaintiffs and defendant.
Thus, in 1881, a condition was brought about whereby the original parcel of land was divided by a railroad right of way with two strips of land 33 feet wide and extending from Cahoon road to the 25-acre tract lying south of the railroad right of way and now belonging to plaintiffs.
The property involved in the instant controversy continued to be owned by Mary Lane and her heirs until February 19, 1921, when the heirs conveyed the same to two persons named Dodd and Aldrich. In the conveyance there were three separate descriptions, one description included plaintiffs' present property, another defendant's present property and the remaining one the strip of land 33 feet wide and extending from the northeast corner of plaintiffs' premises to the center of Cahoon road.
Sometime during the year 1921 Dodd and Aldrich constructed a crossing seven feet wide over the tracks and right of way of the railroad and connecting the premises now owned by plaintiffs with those now owned by defendant. Such railroad crossing was used by Dodd and Aldrich from the year 1922, and upon the establishment of Forest Drive in 1925 they traveled across the land now owned by defendant along a line between the railroad crossing and the south end of Forest Drive. The nature and extent of such use are not disclosed, but it apparently continued for an undisclosed purpose until the separate and distinct tax sales in 1940.
By the present action plaintiffs seek to enjoin the defendant from interfering with their use of the passage or alleged easement from their land across his land to Forest Drive.
An easement has been defined as "a right without profit, created by grant or prescription, which the owner of one estate [called the dominant estate] may exercise in or over the estate of another [called the servient estate] for the benefit of the former." Yeager v. Tuning, 79 Ohio St. 121, 124, 86 N.E. 657, 658, 19 L.R.A. (N.S.), 700, 128 Am. St. Rep., 679.
An easement may be acquired only by grant, express or implied, or by prescription.
Where, however, the easement sought to be enforced is grounded upon implication rather than express grant, it must be clearly established that such a right exists. Implied easements are not favored because they are in derogation of the rule that written instruments speak for themselves. Ciski v. Wentworth, 122 Ohio St. 487, 172 N.E. 276.
An implied easement is based upon the theory that whenever one conveys property he includes in the conveyance whatever is necessary for its beneficial use and enjoyment and retains whatever is necessary for the use and enjoyment of the land retained.
There being in this case no express grant of an easement, it becomes necessary to determine whether one arose by implication.
Easements may be implied in several ways — from an existing use at the time of the severance of ownership in land, from a conveyance describing the premises as bounded upon a way, from a conveyance with reference to a plat or map or from necessity alone, as in the case of ways of necessity. 15 Ohio Jurisprudence, 37, Section 27.
Here, we are concerned only with the first and last of these methods, namely, a use existing at the time of severance or a way of necessity.
It is a well settled rule that a use must be continuous, apparent, permanent and necessary to be the basis of an implied easement upon the severance of the ownership of an estate. 28 Corpus Juris Secundum, Easements, 691, Section 33; and 15 Ohio Jurisprudence, 37, 45, Sections 28, 33.
For a use to be permanent in character "it is required that the use shall have been so long continued prior to severance and so obvious as to show that it was meant to be permanent; a mere temporary provision or arrangement made for the convenience of the entire estate will not constitute that degree of permanency required to burden the property with a continuance of the same when divided or separated by conveyance to different parties." 28 Corpus Juris Secundum, Easements, 691, 692, Section 33; and 15 Ohio Jurisprudence, 41, Section 31.
Plaintiffs having failed, then, to present facts sufficient to warrant the finding of an implied easement from an existing use, we come to a consideration of whether the facts disclosed are such as to sustain a way of necessity.
An implied easement or way of necessity is based upon the theory that without it the grantor or grantee, as the case may be, can not make use of his land. It has been stated that "necessity does not of itself create a right of way, but is said to furnish evidence of the grantor's intention to convey a right of way and, therefore, raises an implication of grant." 17 American Jurisprudence, 961, Section 48.
A way of necessity will not be implied where the claimant has another means of ingress or egress, whether over his own land or over the land of another.
For over 40 years thereafter there was no connection between these lands. As already noted, up to the year 1928 the strip of land 33 feet wide, still in the names of Dodd and Aldrich and connecting Cahoon road with the northeast corner of plaintiffs' property, was used as a way of travel to and from such property.
In our opinion plaintiffs do have a means of access to their lands from Cahoon road over the strip of ground 33 feet wide, referred to above, now belonging to those in plaintiffs' chain of title, and this being so they are not in a position to successfully assert an easement or way of necessity over defendant's property.
A way of necessity will not be implied, where there is another or other outlets available to a public thoroughfare, even though such other outlets are less convenient and would necessitate the expenditure of a considerable sum of money to render them serviceable. 15 Ohio Jurisprudence, 62, Section 44.
"A way of necessity will not be decreed unless the evidence showing the need therefor is clear and convincing. Such a way is not sanctioned when there is available another means of ingress and egress to and from the claimant's land even though it may be less convenient and will involve some labor and expense to repair and maintain."
Although it would be much more convenient and much less expensive for plaintiffs to traverse defendant's property to reach a public street, the imposition of such a burden on defendant's land on the theory of a way of necessity is legally unwarranted in the circumstances exhibited by the record.
The judgment of the Court of Appeals is, therefore, reversed and final judgment rendered for defendant.
https://casetext.com/case/trattar-v-rausch |
Do not use any external knowledge, base your answers only on the provided context block. Your role is to explain legal concepts in an easily accessible manner. Do not use pleasantries or filler text, answer the user’s question directly. | Tell me about NetChoice's legal battle over H.B. 20. | NetChoice’s Challenge to Florida’s S.B. 7072
Florida’s S.B. 7072 imposes restrictions on any information service, system, Internet search engine, or
access software provider that enables access by multiple users to a computer server, is organized as a legal
entity, does business in Florida, and satisfies certain specified user- or revenue-based thresholds. Thus,
while the litigation about the law emphasized the limitations it imposed on social media platforms, the
law applied more broadly. NetChoice challenged restrictions that generally fall into two categories:
content moderation restrictions and individualized-explanation requirements.
The Supreme Court’s analysis in Moody focused on the content moderation restrictions. Those provisions
limit the ability of covered platforms to delete content, make content less visible to other users, or ban
users. Under S.B. 7072, platforms may not “deplatform” a political candidate or deprioritize a candidate’s
or “journalistic enterprise’s” posts. They must “apply censorship, deplatforming, and shadow banning
standards in a consistent manner,” and they cannot change the rules or terms that apply to users more than
once every 30 days. Deplatforming occurs when a platform bans a user for at least 14 days. Shadow
banning occurs when a platform deletes a user’s content or makes the account’s content less visible to
other users.
Before S.B. 7072 took effect, NetChoice sued, alleging that the content moderation provisions, on their
face, violate the First Amendment. The U.S. Court of Appeals for the Eleventh Circuit affirmed a
preliminary injunction barring enforcement of the content moderation provisions while NetChoice’s
challenge is litigated. The court held that the provisions likely “trigger[] First Amendment scrutiny
because [S.B. 7072] restricts social-media platforms’ exercise of editorial judgment.” It decided that the
challenged provisions likely fail constitutional scrutiny because they lack a “substantial or compelling
interest that would justify [the provisions’] significant restrictions on platforms’ editorial judgment.”
NetChoice’s Challenge to Texas’s H.B. 20
Texas’s H.B. 20 applies to social media platforms with more than 50 million monthly active users in the
United States. The law defines social media platforms as public websites or applications that enable users
to create accounts and communicate for the primary purpose of posting user-generated information.
Internet service providers, email providers, and websites “that consist primarily of news, sports,
entertainment, or other” content that is not user generated are excluded from the definition.
As with Florida’s law, H.B. 20 limits when covered platforms may delete or restrict access to user-posted
content. Subject to enumerated exceptions, covered platforms are prohibited from censoring a user’s
content based on viewpoint or the user’s geographic location in Texas. Censor is defined to mean
“block[ing], ban[ning], remov[ing], deplatform[ing], demonetiz[ing], de-boost[ing], restrict[ing],
deny[ing] equal access or visibility to, or otherwise discriminat[ing] against expression.”
Again, NetChoice challenged H.B. 20’s content moderation provisions on their face and asked a court to
enjoin their enforcement before the law took effect. The U.S. Court of Appeals for the Fifth Circuit denied
the request. Expressly disagreeing with the Eleventh Circuit’s reasoning about Florida’s law, the Fifth
Circuit held that Texas’s content moderation provisions do not likely implicate First Amendment rights.
According to the Fifth Circuit, NetChoice was seeking to assert a “right to censor what people say” that is
not protected by the First Amendment. In the alternative, the court held that, even if the law restricted
protected expression, it is a content- and viewpoint-neutral law—so subject to intermediate scrutiny—and
Texas’s interest in protecting the free exchange of ideas is sufficiently important to satisfy that standard. | Do not use any external knowledge, base your answers only on the provided context block. Your role is to explain legal concepts in an easily accessible manner. Do not use pleasantries or filler text, answer the user’s question directly.
NetChoice’s Challenge to Florida’s S.B. 7072
Florida’s S.B. 7072 imposes restrictions on any information service, system, Internet search engine, or
access software provider that enables access by multiple users to a computer server, is organized as a legal
entity, does business in Florida, and satisfies certain specified user- or revenue-based thresholds. Thus,
while the litigation about the law emphasized the limitations it imposed on social media platforms, the
law applied more broadly. NetChoice challenged restrictions that generally fall into two categories:
content moderation restrictions and individualized-explanation requirements.
The Supreme Court’s analysis in Moody focused on the content moderation restrictions. Those provisions
limit the ability of covered platforms to delete content, make content less visible to other users, or ban
users. Under S.B. 7072, platforms may not “deplatform” a political candidate or deprioritize a candidate’s
or “journalistic enterprise’s” posts. They must “apply censorship, deplatforming, and shadow banning
standards in a consistent manner,” and they cannot change the rules or terms that apply to users more than
once every 30 days. Deplatforming occurs when a platform bans a user for at least 14 days. Shadow
banning occurs when a platform deletes a user’s content or makes the account’s content less visible to
other users.
Before S.B. 7072 took effect, NetChoice sued, alleging that the content moderation provisions, on their
face, violate the First Amendment. The U.S. Court of Appeals for the Eleventh Circuit affirmed a
preliminary injunction barring enforcement of the content moderation provisions while NetChoice’s
challenge is litigated. The court held that the provisions likely “trigger[] First Amendment scrutiny
because [S.B. 7072] restricts social-media platforms’ exercise of editorial judgment.” It decided that the
challenged provisions likely fail constitutional scrutiny because they lack a “substantial or compelling
interest that would justify [the provisions’] significant restrictions on platforms’ editorial judgment.”
NetChoice’s Challenge to Texas’s H.B. 20
Texas’s H.B. 20 applies to social media platforms with more than 50 million monthly active users in the
United States. The law defines social media platforms as public websites or applications that enable users
to create accounts and communicate for the primary purpose of posting user-generated information.
Internet service providers, email providers, and websites “that consist primarily of news, sports,
entertainment, or other” content that is not user generated are excluded from the definition.
As with Florida’s law, H.B. 20 limits when covered platforms may delete or restrict access to user-posted
content. Subject to enumerated exceptions, covered platforms are prohibited from censoring a user’s
content based on viewpoint or the user’s geographic location in Texas. Censor is defined to mean
“block[ing], ban[ning], remov[ing], deplatform[ing], demonetiz[ing], de-boost[ing], restrict[ing],
deny[ing] equal access or visibility to, or otherwise discriminat[ing] against expression.”
Again, NetChoice challenged H.B. 20’s content moderation provisions on their face and asked a court to
enjoin their enforcement before the law took effect. The U.S. Court of Appeals for the Fifth Circuit denied
the request. Expressly disagreeing with the Eleventh Circuit’s reasoning about Florida’s law, the Fifth
Circuit held that Texas’s content moderation provisions do not likely implicate First Amendment rights.
According to the Fifth Circuit, NetChoice was seeking to assert a “right to censor what people say” that is
not protected by the First Amendment. In the alternative, the court held that, even if the law restricted
protected expression, it is a content- and viewpoint-neutral law—so subject to intermediate scrutiny—and
Texas’s interest in protecting the free exchange of ideas is sufficiently important to satisfy that standard.
Tell me about NetChoice's legal battle over H.B. 20. |
Use only the information provided to answer. Do not use outside sources or internal knowledge. Use bullet point format in chronological order. | Please list the dates and their significance. | The Court of Arbitration for Sport (CAS) has issued the operative part of its decision in the appeal arbitration procedures CAS 2023/A/10025 Simona Halep v. International Tennis Integrity Agency (ITIA) and CAS 2023/A/10227 International Tennis Integrity Agency (ITIA) v. Simona Halep: The appeal procedures before the CAS concerned two separate charges:
1. a charge which arose from a prohibited substance (Roxadustat) being detected in a urine sample collected from Simona Halep on 29 August 2022 during the US Open; and
2. a charge that Ms Halep’s Athlete Biological Passport (ABP), in particular a blood sample given by Ms Halep on 22 September 2022, established use of a prohibited substance and/or prohibited method. In its decision dated 22 September 2023, the International Tennis Federation (ITF) Independent Tribunal found
Ms Halep guilty of both Anti-doping Rule Violations (ADRV) and imposed a four-year period of ineligibility on her. In the appeal filed by Simona Halep at the CAS against the first instance Decision, Ms Halep requested that the sanction be reduced and be no longer than the period of the provisional suspension already served. In its separate appeal, the ITIA requested that the CAS sanction Ms Halep’s ADRVs together as one single violation based on the violation that carried the most severe sanction, and the imposition of a period of ineligibility of between four and six years.
The CAS appeal arbitration proceedings involved intensive pre-hearing processes and a three-day hearing which took place on 7-9 February 2024 in Lausanne, Switzerland. The CAS Panel heard from many lay and expert witnesses, most of whom were present in person at the hearing. The CAS Panel has unanimously determined that the four-year period of ineligibility imposed by the ITF Independent Tribunal is to be reduced to a period of ineligibility of nine (9) months starting on 7 October 2022, which period expired on 6 July 2023. As that period expired before the appeal procedures were even lodged with the CAS, the CAS Panel has determined it appropriate to issue the operative part of the Arbitral Award as soon as practicable, together with a comprehensive media release. The CAS Panel has also ordered the disqualification of all competitive results achieved by Ms. Halep from 29 August 2022 (the date of her positive sample) to 7 October 2022, including forfeiture of any medals, titles,
ranking points and prize money. Therefore, the appeal filed by the ITIA is dismissed and the appeal filed by Simona Halep is partially upheld (her request to backdate the start of the suspension to 29 August 2022 is dismissed).
Roxadustat charge
According to Articles 2.1 and 2.2 of the Tennis Anti-Doping Programme (“TADP”), it is each player’s personal duty to ensure that no prohibited substance enters their body and players are responsible for any prohibited substances found to be present in their samples. In this matter, a prohibited substance (i.e. Roxadustat) was found to be present in a sample collected from
Ms. Halep on 29 August 2022 during the US Open. Ms. Halep did not contest liability in that she accepted that, by reasons of the presence of Roxadustat in her sample, she had committed anti-doping rule violations under Articles 2.1 and 2.2 of the TADP. However, she objected to the intentional nature of the infraction and argued that the positive test was the result of contamination.
Having carefully considered all the evidence put before it, the CAS Panel determined that Ms. Halep had established, on the balance of probabilities, that the Roxadustat entered her body through the consumption of a contaminated supplement which she had used in the days shortly before 29 August 2022 and that the Roxadustat, as detected in her sample, came from that contaminated product. As a result, the CAS Panel determined that Ms. Halep had also established, on the balance of probabilities, that her anti-doping rule violations were not
intentional. Although the CAS Panel found that Ms. Halep did bear some level of fault or negligence for her violations, as she
did not exercise sufficient care when using the Keto MCT supplement, it concluded that she bore no significant
fault or negligence.
Athlete Biological Passport (ABP) charge
With respect to the charge concerning Ms. Halep’s ABP, the ITIA bore the onus of establishing (to the standard of comfortable satisfaction) that Ms. Halep had used a prohibited substance and/or prohibited method. It primarily relied on a blood sample given by Ms. Halep on 22 September 2022, the results of which it alleged demonstrated the anti-doping rule violation under Article 2.2 of the TADP. Contrary to the reasoning of the first instance tribunal, the CAS Panel determined that it was appropriate in the
circumstances to consider the results of a private blood sample given by Ms. Halep on 9 September 2022 in the context of a surgery which occurred shortly thereafter. Those results, and Ms. Halep’s public statements that she did not intend to compete for the remainder of the 2022 calendar year, impacted the plausibility of the doping scenarios relied upon by the ITF Independent Tribunal. Having regard to the evidence as a whole, the CAS Panel was not comfortably satisfied that an anti-doping rule violation under Article 2.2. of the TADP had occurred. It
therefore dismissed that charge. The CAS Panel has issued the following decision:
1. The appeal filed by Simona Halep on 28 September 2023 against the decision issued on 22 September 2023 by the ITF Independent Tribunal is admissible and is partially upheld.
2. The appeal filed by the International Tennis Integrity Agency (ITIA) on 14 December 2023 against the decision issued on 22 September 2023 by the ITF Independent Tribunal is admissible and is dismissed.
3. The decision issued on 22 September 2023 by the ITF Independent Tribunal is set aside.
4. Simona Halep is found to have committed Anti-Doping Rule Violations under Articles 2.1 (presence) and 2.2 (use) of the Tennis Anti-Doping Programme 2022 as a result of the presence of a Prohibited Substance (Roxadustat) in her urine sample collected In-Competition on 29 August 2022.
5. Simona Halep is sanctioned with a period of Ineligibility of nine (9) months, commencing on 7 October 2022.
6. Credit is given to Simona Halep for her provisional suspension served since 7 October 2022.
7. All results obtained by Simona Halep in competitions taking place in the period 29 August 2022 to 7 October 2022 are disqualified, with all resulting consequences, including forfeiture of any medals, titles, ranking points and prize money.
8. The award is pronounced without costs, except for the Court Office fees of CHF 1,000 (one thousand Swiss francs) paid by each of Simona Halep in respect of her appeal and the International Tennis Integrity Agency (ITIA) in respect of its appeal, which is retained by the CAS.
9. The International Tennis Integrity Agency (ITIA) is ordered to pay Simona Halep an amount of CHF 20,000 (twenty thousand Swiss francs) as a contribution towards her legal fees and other expenses incurred in connection with these arbitration proceedings. The reasoned award will be notified to the parties in due course. It will be published by CAS unless the parties
request confidentiality. | The Court of Arbitration for Sport (CAS) has issued the operative part of its decision in the appeal arbitration procedures CAS 2023/A/10025 Simona Halep v. International Tennis Integrity Agency (ITIA) and CAS 2023/A/10227 International Tennis Integrity Agency (ITIA) v. Simona Halep: The appeal procedures before the CAS concerned two separate charges:
1. a charge which arose from a prohibited substance (Roxadustat) being detected in a urine sample collected from Simona Halep on 29 August 2022 during the US Open; and
2. a charge that Ms Halep’s Athlete Biological Passport (ABP), in particular a blood sample given by Ms Halep on 22 September 2022, established use of a prohibited substance and/or prohibited method. In its decision dated 22 September 2023, the International Tennis Federation (ITF) Independent Tribunal found
Ms Halep guilty of both Anti-doping Rule Violations (ADRV) and imposed a four-year period of ineligibility on her. In the appeal filed by Simona Halep at the CAS against the first instance Decision, Ms Halep requested that the sanction be reduced and be no longer than the period of the provisional suspension already served. In its separate appeal, the ITIA requested that the CAS sanction Ms Halep’s ADRVs together as one single violation based on the violation that carried the most severe sanction, and the imposition of a period of ineligibility of between four and six years.
The CAS appeal arbitration proceedings involved intensive pre-hearing processes and a three-day hearing which took place on 7-9 February 2024 in Lausanne, Switzerland. The CAS Panel heard from many lay and expert witnesses, most of whom were present in person at the hearing. The CAS Panel has unanimously determined that the four-year period of ineligibility imposed by the ITF Independent Tribunal is to be reduced to a period of ineligibility of nine (9) months starting on 7 October 2022, which period expired on 6 July 2023. As that period expired before the appeal procedures were even lodged with the CAS, the CAS Panel has determined it appropriate to issue the operative part of the Arbitral Award as soon as practicable, together with a comprehensive media release. The CAS Panel has also ordered the disqualification of all competitive results achieved by Ms. Halep from 29 August 2022 (the date of her positive sample) to 7 October 2022, including forfeiture of any medals, titles,
ranking points and prize money. Therefore, the appeal filed by the ITIA is dismissed and the appeal filed by Simona Halep is partially upheld (her request to backdate the start of the suspension to 29 August 2022 is dismissed).
Roxadustat charge
According to Articles 2.1 and 2.2 of the Tennis Anti-Doping Programme (“TADP”), it is each player’s personal duty to ensure that no prohibited substance enters their body and players are responsible for any prohibited substances found to be present in their samples. In this matter, a prohibited substance (i.e. Roxadustat) was found to be present in a sample collected from
Ms. Halep on 29 August 2022 during the US Open. Ms. Halep did not contest liability in that she accepted that, by reasons of the presence of Roxadustat in her sample, she had committed anti-doping rule violations under Articles 2.1 and 2.2 of the TADP. However, she objected to the intentional nature of the infraction and argued that the positive test was the result of contamination.
Having carefully considered all the evidence put before it, the CAS Panel determined that Ms. Halep had established, on the balance of probabilities, that the Roxadustat entered her body through the consumption of a contaminated supplement which she had used in the days shortly before 29 August 2022 and that the Roxadustat, as detected in her sample, came from that contaminated product. As a result, the CAS Panel determined that Ms. Halep had also established, on the balance of probabilities, that her anti-doping rule violations were not
intentional. Although the CAS Panel found that Ms. Halep did bear some level of fault or negligence for her violations, as she
did not exercise sufficient care when using the Keto MCT supplement, it concluded that she bore no significant
fault or negligence.
Athlete Biological Passport (ABP) charge
With respect to the charge concerning Ms. Halep’s ABP, the ITIA bore the onus of establishing (to the standard of comfortable satisfaction) that Ms. Halep had used a prohibited substance and/or prohibited method. It primarily relied on a blood sample given by Ms. Halep on 22 September 2022, the results of which it alleged demonstrated the anti-doping rule violation under Article 2.2 of the TADP. Contrary to the reasoning of the first instance tribunal, the CAS Panel determined that it was appropriate in the
circumstances to consider the results of a private blood sample given by Ms. Halep on 9 September 2022 in the context of a surgery which occurred shortly thereafter. Those results, and Ms. Halep’s public statements that she did not intend to compete for the remainder of the 2022 calendar year, impacted the plausibility of the doping scenarios relied upon by the ITF Independent Tribunal. Having regard to the evidence as a whole, the CAS Panel was not comfortably satisfied that an anti-doping rule violation under Article 2.2. of the TADP had occurred. It
therefore dismissed that charge. The CAS Panel has issued the following decision:
1. The appeal filed by Simona Halep on 28 September 2023 against the decision issued on 22 September 2023 by the ITF Independent Tribunal is admissible and is partially upheld.
2. The appeal filed by the International Tennis Integrity Agency (ITIA) on 14 December 2023 against the decision issued on 22 September 2023 by the ITF Independent Tribunal is admissible and is dismissed.
3. The decision issued on 22 September 2023 by the ITF Independent Tribunal is set aside.
4. Simona Halep is found to have committed Anti-Doping Rule Violations under Articles 2.1 (presence) and 2.2 (use) of the Tennis Anti-Doping Programme 2022 as a result of the presence of a Prohibited Substance (Roxadustat) in her urine sample collected In-Competition on 29 August 2022.
5. Simona Halep is sanctioned with a period of Ineligibility of nine (9) months, commencing on 7 October 2022.
6. Credit is given to Simona Halep for her provisional suspension served since 7 October 2022.
7. All results obtained by Simona Halep in competitions taking place in the period 29 August 2022 to 7 October 2022 are disqualified, with all resulting consequences, including forfeiture of any medals, titles, ranking points and prize money.
8. The award is pronounced without costs, except for the Court Office fees of CHF 1,000 (one thousand Swiss francs) paid by each of Simona Halep in respect of her appeal and the International Tennis Integrity Agency (ITIA) in respect of its appeal, which is retained by the CAS.
9. The International Tennis Integrity Agency (ITIA) is ordered to pay Simona Halep an amount of CHF 20,000 (twenty thousand Swiss francs) as a contribution towards her legal fees and other expenses incurred in connection with these arbitration proceedings. The reasoned award will be notified to the parties in due course. It will be published by CAS unless the parties
request confidentiality.
Please list the dates and their significance.
Use only the information provided to answer. Do not use outside sources or internal knowledge. Use bullet point format in chronological order. |
Only use information from the document to answer questions. At the end of each response include a list of all other documents referenced in the input document. If any URL's or contact information is included, make sure that information is listed at the end of the response. | User Input: Write two, roughly 200 word paragraphs listing at least four ways PII can be leaked. | 4.3 Protecting Data on Telework Client Devices
Telework often involves creating and editing work-related information such as email, word processing
documents, and spreadsheets. Because that data is important, it should be treated like other important
assets of the organization. Two things an organization can do to protect data on telework devices are to
secure it on the telework device and to periodically back it up to a location controlled by the organization.
More information on this is provided in Sections 4.3.1 through 4.3.3. Organizations can also choose not to
allow the organization’s information to be stored on telework devices, but to instead store it centrally at
the organization.
Sensitive information, such as certain types of personally identifiable information (PII) (e.g., personnel
records, medical records, financial records), that is stored on or sent to or from telework devices should be
protected so that malicious parties cannot access or alter it. For example, teleworkers often forget that
storing sensitive information on a CD that is carried with their device, or printing the information on a
public printer, can also expose the information in ways that are not significant within a typical enterprise
environment. An unauthorized release of sensitive information could damage the public’s trust in an
organization, jeopardize the organization’s mission, or harm individuals if their personal information has
been released.
32 For more information on application whitelisting, see NIST SP 800-167, Guide to Application Whitelisting
(http://dx.doi.org/10.6028/NIST.SP.800-167).
4.3.1 Encrypting Data at Rest
All telework devices, regardless of their size or location, can be stolen. Some thieves may want to read
the contents of the data on the device, and quite possibly use that data for criminal purposes. To prevent
this, an organization should have a policy of encrypting all sensitive data when it is at rest on the device
and on removable media used by the device. The creation and use of cryptographic keys for encrypting
remote data at rest should follow the same policies that an organization has for other keys that protect data
at rest.
There are many methods for protecting data at rest, and they mostly depend on the type of device or
removable media that is being protected. Most operating systems have their own data encryption
mechanisms, and there are also numerous third-party applications that provide similar capabilities.
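As an editorial aside that is not part of NIST SP 800-46, the short sketch below illustrates what file-level encryption of data at rest can look like in practice. It assumes the third-party Python cryptography package is available; the file names and the choice of the Fernet recipe are hypothetical examples rather than mechanisms prescribed by the guide.

    # Minimal sketch: encrypting and decrypting a local file at rest with a symmetric key.
    # Assumption: the third-party "cryptography" package is installed; file paths are hypothetical.
    from cryptography.fernet import Fernet

    def encrypt_file(plain_path: str, enc_path: str, key: bytes) -> None:
        """Read plain_path and write an encrypted copy to enc_path."""
        f = Fernet(key)
        with open(plain_path, "rb") as src:
            ciphertext = f.encrypt(src.read())
        with open(enc_path, "wb") as dst:
            dst.write(ciphertext)

    def decrypt_file(enc_path: str, key: bytes) -> bytes:
        """Return the decrypted contents of enc_path."""
        f = Fernet(key)
        with open(enc_path, "rb") as src:
            return f.decrypt(src.read())

    if __name__ == "__main__":
        key = Fernet.generate_key()  # in practice, create and store keys under organizational key policy
        encrypt_file("telework_report.docx", "telework_report.docx.enc", key)
        print(len(decrypt_file("telework_report.docx.enc", key)), "bytes recovered")

In such a sketch, the key would be generated, stored, and rotated under the same organizational policies the guide describes for other keys that protect data at rest.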
Generally, when technologies such as full disk encryption are being used to protect data at rest on PCs,
teleworkers should shut down their telework devices instead of placing them into sleep mode when the
devices will not be used for an extended time or when the teleworker will not be with the device. This
helps ensure that the data at rest and the decryption key are protected by the storage encryption
technology. | Only use information from the document to answer questions. At the end of each response include a list of all other documents referenced in the input document. If any URL's or contact information is included, make sure that information is listed at the end of the response.
Input Document: 4.3 Protecting Data on Telework Client Devices
Telework often involves creating and editing work-related information such as email, word processing
documents, and spreadsheets. Because that data is important, it should be treated like other important
assets of the organization. Two things an organization can do to protect data on telework devices are to
secure it on the telework device and to periodically back it up to a location controlled by the organization.
More information on this is provided in Sections 4.3.1 through 4.3.3. Organizations can also choose not to
allow the organization’s information to be stored on telework devices, but to instead store it centrally at
the organization.
Sensitive information, such as certain types of personally identifiable information (PII) (e.g., personnel
records, medical records, financial records), that is stored on or sent to or from telework devices should be
protected so that malicious parties cannot access or alter it. For example, teleworkers often forget that
storing sensitive information on a CD that is carried with their device, or printing the information on a
public printer, can also expose the information in ways that are not significant within a typical enterprise
environment. An unauthorized release of sensitive information could damage the public’s trust in an
organization, jeopardize the organization’s mission, or harm individuals if their personal information has
been released.
32 For more information on application whitelisting, see NIST SP 800-167, Guide to Application Whitelisting
(http://dx.doi.org/10.6028/NIST.SP.800-167).
4.3.1 Encrypting Data at Rest
All telework devices, regardless of their size or location, can be stolen. Some thieves may want to read
the contents of the data on the device, and quite possibly use that data for criminal purposes. To prevent
this, an organization should have a policy of encrypting all sensitive data when it is at rest on the device
and on removable media used by the device. The creation and use of cryptographic keys for encrypting
remote data at rest should follow the same policies that an organization has for other keys that protect data
at rest.
There are many methods for protecting data at rest, and they mostly depend on the type of device or
removable media that is being protected. Most operating systems have their own data encryption
mechanisms, and there are also numerous third-party applications that provide similar capabilities.
Generally, when technologies such as full disk encryption are being used to protect data at rest on PCs,
teleworkers should shut down their telework devices instead of placing them into sleep mode when the
devices will not be used for an extended time or when the teleworker will not be with the device. This
helps ensure that the data at rest and the decryption key are protected by the storage encryption
technology.
User Input: Write two, roughly 200 word paragraphs listing at least four ways PII can be leaked. |
For this task, you may only consult the information given in the prompt. No outside sources or prior knowledge may be used.
The response should be given as a list with bullet points. Each list item should comprise a single sentence of no more than 20 words. | What types of attacks does the text identify that the 6G network may face? | Minimum Baseline Security Standard (MBSS) and Autonomous Security Assurance
The structural heterogeneity and distribution of the 6G network, coupled with the diverse ecosystem in
computing nodes and devices, results in a coarse degree of data access management. This may lead to a
malicious actor being able to penetrate the security of the edge device and so compromise this aspect of
the system. Untrusted computing nodes joining the network may hack user data at the edge of the network
and interrupt the operation. Additionally, because of the performance limitations of edge nodes, these
devices cannot resist network attacks, such as man-in-the-middle and denial-of-service, which lead to the
breakdown of the edge network and instability.
In the case of 6G, building a secure supply chain is vital, vendor compliance is a must and security assurance
[GSMA NESAS-2.0, ISO], OWASP vulnerability, the integrity of any third-party elements - together with
trust and privacy - is also extremely important. Attacks and issues that compromise privacy and security
often occur in three main areas of the network: the infrastructure layer security, the network layer security,
and the application-level security (which consists of User plane traffic, Control plane traffic and
Management plane traffic).
Establishing a reliable level of security policies, procedures, and Minimum Baseline Security Standard
(MBSS) for all network functions is extremely important to minimize risks. There is a need for centralized
identity governance for resource management and user access – the lack of which may cause network
exploitation of applications and systems, leading to unauthorized access of user data, log files and
manipulation of AI/ML models. A prominent example is poisoning and backdoor attacks for manipulating
the data used for training an AI model, with countermeasures for prevention and detection including use
of data from trusted sources, protecting the supply chain and sanitizing data. Another attack type is
adversarial attacks that target the model in operation by using specially crafted inputs to mislead the model.
Such attacks can be mitigated by expanding the training process (adversarial training), introducing
additional modules for detecting unusual ingests and sanitizing input data. Attacks that compromise the
confidentiality and privacy of the training data or the model’s parameters can be addressed with techniques
like differential privacy and homomorphic encryption. Additionally, restricting the number and type of
queries to the model and tailoring query outputs can help mitigate these risks.
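As an editorial aside that is not part of the source text, the sketch below illustrates the kind of "specially crafted input" described above, using a fast-gradient-sign perturbation against a toy logistic-regression scorer and folding the perturbed example back into one adversarial-training update. The weights, input, learning rate, and epsilon are hypothetical values chosen only for illustration, not a reference implementation of any 6G security framework.

    # Minimal sketch of an adversarial (evasion) attack and one adversarial-training step.
    # Assumption: only NumPy is used; all numbers are hypothetical.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    w = rng.normal(size=4)   # toy model weights
    x = rng.normal(size=4)   # a benign input
    y = 1.0                  # its true label

    # Gradient of the logistic loss with respect to the input.
    grad_x = (sigmoid(w @ x) - y) * w

    # Fast-gradient-sign method: a small perturbation that maximally increases the loss.
    epsilon = 0.1
    x_adv = x + epsilon * np.sign(grad_x)

    # One adversarial-training update: also fit the model on the perturbed example.
    lr = 0.05
    w -= lr * (sigmoid(w @ x_adv) - y) * x_adv

    print("score on benign input:", sigmoid(w @ x))
    print("score on adversarial input:", sigmoid(w @ x_adv))

Expanding the training set with such perturbed samples is what the text calls adversarial training; the privacy-oriented defenses it mentions (differential privacy, homomorphic encryption, query restrictions) act on the training data and on model outputs rather than inside this loop.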
Other attacks jeopardize the confidentiality and privacy of the data used to train the model or the model’s
parameters. They can be dealt with by approaches such as: differential privacy and homomorphic
encryption, introducing restrictions on the number and type of queries to the model and tailoring the
output to queries. Therefore, a Unified Framework (UF) is necessary to prevent attacks on the AI/ML model,
with a centralized assurance procedure used for evaluation and assessment, before moving it to
production. Then, on a regular basis, the model should be evaluated to ensure it provides the desired
functionality and is sufficiently robust to changes in input data both natural and (potentially) adversarial. | System instruction:
For this task, you may only consult the information given in the prompt. No outside sources or prior knowledge may be used.
The response should be given as a list with bullet points. Each list item should comprise a single sentence of no more than 20 words.
Question:
What types of attacks does the text identify that the 6G network may face?
Context:
Minimum Baseline Security Standard (MBSS) and Autonomous Security Assurance
The structural heterogeneity and distribution of the 6G network, coupled with the diverse ecosystem in
computing nodes and devices, results in a coarse degree of data access management. This may lead to a
malicious actor being able to penetrate the security of the edge device and so compromise this aspect of
the system. Untrusted computing nodes joining the network may hack user data at the edge of the network
and interrupt the operation. Additionally, because of the performance limitations of edge nodes, these
devices cannot resist network attacks, such as man-in-the-middle and denial-of-service, which lead to the
breakdown of the edge network and instability.
In the case of 6G, building a secure supply chain is vital, vendor compliance is a must and security assurance
[GSMA NESAS-2.0, ISO], OWASP vulnerability, the integrity of any third-party elements - together with
trust and privacy - is also extremely important. Attacks and issues that compromise privacy and security
often occur in three main areas of the network: the infrastructure layer security, the network layer security,
and the application-level security (which consists of User plane traffic, Control plane traffic and
Management plane traffic).
Establishing a reliable level of security policies, procedures, and Minimum Baseline Security Standard
(MBSS) for all network functions is extremely important to minimize risks. There is a need for centralized
identity governance for resource management and user access – the lack of which may cause network
exploitation of applications and systems, leading to unauthorized access of user data, log files and
manipulation of AI/ML models. A prominent example is poisoning and backdoor attacks for manipulating
the data used for training an AI model, with countermeasures for prevention and detection including use
of data from trusted sources, protecting the supply chain and sanitizing data. Another attack type is
adversarial attacks that target the model in operation by using specially crafted inputs to mislead the model.
Such attacks can be mitigated by expanding the training process (adversarial training), introducing
additional modules for detecting unusual ingests and sanitizing input data. Attacks that compromise the
confidentiality and privacy of the training data or the model’s parameters can be addressed with techniques
like differential privacy and homomorphic encryption. Additionally, restricting the number and type of
queries to the model and tailoring query outputs can help mitigate these risks.
Other attacks jeopardize the confidentiality and privacy of the data used to train the model or the model’s
parameters. They can be dealt with by approaches such as: differential privacy and homomorphic
encryption, introducing restrictions on the number and type of queries to the model and tailoring the
output to queries. Therefore, a Unified Framework (UF) is necessary to prevent attacks on the AI/ML model,
with a centralized assurance procedure used for evaluation and assessment, before moving it to
production. Then, on a regular basis, the model should be evaluated to ensure it provides the desired
functionality and is sufficiently robust to changes in input data both natural and (potentially) adversarial. |
Don't rely on information outside of the provided text. Use paragraphs. If you can't form an answer, just say "I can't answer that." | Describe the DoD's defense-specific areas. | 2. Effective Adoption Areas - where there is existing vibrant commercial sector activity
Trusted AI and Autonomy
Artificial Intelligence (AI) is the software engineering discipline of expanding
capabilities of software applications to perform tasks that currently require human
intelligence. Machine learning is an engineering subfield of AI that trains software
models using example data, simulations, or real-world experiences rather than by direct
programming or coding. Autonomy is the engineering discipline that expands robots'
abilities to perform tasks while limiting the need for human interaction. AI holds
tremendous promise to improve the ability and function of nearly all systems and
operations. Trusted AI with trusted autonomous systems are imperative to dominate
future conflicts. As AI, machine learning, and autonomous operations continue to mature,
the DoD will focus on evidence-based AI-assurance and enabling operational
effectiveness.
Integrated Network Systems-of-Systems
Integrated Network Systems-of-Systems technology encompasses the capability to
communicate, provide real-time dissemination of information across the Department, and
effective command and control in a contested electromagnetic environment. Integrated
Network Systems-of-Systems capability must enable engagements by any sensor and
shooter, with the ability to integrate disparate systems. An interoperable network that
leverages emerging capabilities across the electromagnetic spectrum such as 5G, software
defined networking and radios, and modern information exchange techniques will allow
the Department to better integrate many diverse mission systems and provide fully
networked command, control, and communication that is capable, resilient, and secure.
Microelectronics
Microelectronics are circuits and components that serve as the "brain" to human-made
electronic functional systems. Virtually every military and commercial system relies on
microelectronics. Diminishing microelectronics manufacturing in the United States and
supply chain concerns have highlighted national economic and security risks. Working
closely with industry, academia, and across the Government, the Department is
addressing the need for secure microelectronics sources and will leverage state-of-the-art
commercial development and production for defense microelectronic solutions.
Space Technology
Space technologies include space flight, space communication and other technologies
needed to maintain space operations. With rising threats and increasing dependence on
space-based systems, the Department's space strategy must shift away from exquisite
satellites to a more robust and proliferated architecture. Novel space technologies are
necessary to enable resilient cross-domain operations. The space strategy must
incorporate technologies that enhance the Department's adaptive and reconfigurable
capabilities in space situational awareness, space control, communication path diversity,
on-orbit processing, and autonomy.
Renewable Energy Generation and Storage
Renewable energy generation and storage includes solar, wind, bio-based and geothermal
technologies, advanced energy storage, electronic engines, and power grid integration.
Renewable energy generation and storage promises to decrease warfighter vulnerability
and deliver new operational capabilities for the Department. From more efficient batteries
to diversifying energy sources and reduced fuel transportation risks, renewable energy
generation and storage will add resilience and flexibility in a contested logistics
environment.
Advanced Computing and Software
Advanced computing and software technologies include supercomputing, cloud
computing, data storage, computing architectures, and data processing. Software is
ubiquitous throughout the Department, but the speed at which software develops outpaces
the Department's ability to stay up to date. The Department must rapidly modernize its
legacy software systems with resilient, affordable, and assured new software that has
been designed, developed, and tested using processes that establish confidence in its
performance. The Department must migrate to a Development-Security-Operations
(DevSecOps) approach in its software development and evolve to a model of continuous
development, continuous test, and continuous delivery. The Department must leverage
modular open system architecture approaches to isolate hardware from software and
enable rapid upgrades to secure processors.
Human-Machine Interfaces
Human-Machine Interface refers to technologies related to human-machine teaming and
augmented and virtual reality. Rapid advancements in this technology will have a
multitude of benefits for our service members. Highly immersive realistic training
environments provide real-time feedback to enhance warfighter performance. Intuitive interactive human-machine interfaces enable rapid mission planning and mission
command by providing a common operational picture to geographically distributed
operations.
3. Defense-Specific Areas
Directed Energy
Directed Energy Weapons utilize lasers, high power microwaves, and high energy
particle beams to produce precision disruption, damage, or destruction of military targets
at range. Directed energy systems will allow the Department to counter a wide variety of
current and emerging threats with rapid responses and engagement at the speed of light.
High-power lasers and high-power microwave technologies both offer new ways to
counter diverse sets of threats.
Hypersonics
Hypersonic systems fly within the atmosphere for significant portions of their flight at or
above 5 times the speed of sound, or approximately 3700 miles per hour. Hypersonics
dramatically shorten the timeline to strike a target and increase unpredictability. While
strategic competitors are pursuing and rapidly fielding advanced hypersonic missiles, the
DoD will develop leap-ahead and cost-effective technologies for our air, land, and sea
operational forces.
Integrated Sensing and Cyber
To provide advantage for the joint force in highly contested environments, the
Department must develop wideband sensors to operate at the intersection of cyber space,
electronic warfare, radar, and communications. Sensors must be able to counter advanced
threats and can no longer be stove-piped and single function.
Answer questions using ONLY the provided context. Do NOT use the internet or any internal knowledge. Use markdown seldomly, and only use bold or italic, nothing else.
What are the requirements of OPM?
As part of the assessment, S. 4043 would require OPM to explain whether each agency met its telework
goals and, if not, the actions being taken to identify and eliminate barriers to meeting them. The annual
report would also discuss additional steps that are planned by agencies to ensure telework oversight and
quality control and increase the utilization rates of office building space owned or leased by the agencies.
S. 4043 also requires the Office of Management and Budget (OMB), in consultation with GSA and the
Federal Real Property Council, to develop benchmarks and guidance for executive agencies to use when
calculating building utilization rates. S. 4043 would then require each executive agency head to establish
(1) a system to track office building space utilization rates consistent with that OMB guidance and (2)
indicators that measure the effects of telework policy on the management of real and personal property,
among other things.
S. 4043 would also require OPM to establish data standards to aid telework reporting requirements and
for automated telework tracking within payroll systems used by agencies. S. 4043 would require OPM, in
turn, to create an online tool that makes the standardized and reported data publicly available and would
allow OPM to use the online tool to fulfill its annual reporting requirements. For a more detailed
discussion of the bill’s provisions on telework data standards, including office building utilization data,
see CRS Insight IN12352, Establishing Data Standards and Measuring Building Use: Select Provisions
of the Telework Transparency Act of 2024 (S. 4043).
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document]
I have a mid-year presentation coming up in 2 weeks about specific treatments for type 2 diabetes. I need you to compare Insulin Efsitora versus Degludec in Type 2 diabetes without previous insulin treatment.
Authors: Carol Wysham, M.D., Harpreet S. Bajaj, M.D., M.P.H., Stefano Del Prato, M.D., Denise Reis Franco, M.D., Arihiro Kiyosue, M.D., Ph.D., Dominik Dahl, M.D., Chunmei Zhou, M.S., Molly C. Carr, M.D., Michael Case, M.S., and Livia Firmino Gonçalves, M.D., for the QWINT-2 Investigators*
Published September 10, 2024
Background
Insulin efsitora alfa (efsitora) is a new basal insulin designed for once-weekly administration. Data on safety and efficacy have been limited to small, phase 1 or phase 2 trials.
Methods
We conducted a 52-week, phase 3, parallel-design, open-label, treat-to-target trial involving adults with type 2 diabetes who had not previously received insulin. Participants were randomly assigned in a 1:1 ratio to receive efsitora or degludec. The primary end point was the change in the glycated hemoglobin level from baseline to week 52; we hypothesized that efsitora would be noninferior to degludec (noninferiority margin, 0.4 percentage points). Secondary and safety end points included the change in the glycated hemoglobin level in subgroups of participants using and not using glucagon-like peptide-1 (GLP-1) receptor agonists, the percentage of time that the glucose level was in the target range of 70 to 180 mg per deciliter in weeks 48 through 52, and hypoglycemic episodes.
Results
A total of 928 participants underwent randomization (466 to the efsitora group and 462 to the degludec group). The mean glycated hemoglobin level decreased from 8.21% at baseline to 6.97% at week 52 with efsitora (least-squares mean change, -1.26 percentage points) and from 8.24% to 7.05% with degludec (least-squares mean change, -1.17 percentage points) (estimated treatment difference, -0.09 percentage points; 95% confidence interval [CI], -0.22 to 0.04), findings that showed noninferiority. Efsitora was noninferior to degludec with respect to the change in the glycated hemoglobin level in participants using and not using GLP-1 receptor agonists. The percentage of time that the glucose level was within the target range was 64.3% with efsitora and 61.2% with degludec (estimated treatment difference, 3.1 percentage points; 95% CI, 0.1 to 6.1). The rate of combined clinically significant or severe hypoglycemia was 0.58 events per participant-year of exposure with efsitora and 0.45 events per participant-year of exposure with degludec (estimated rate ratio, 1.30; 95% CI, 0.94 to 1.78). No severe hypoglycemia was reported with efsitora; six episodes were reported with degludec. The incidence of adverse events was similar in the two groups.
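As a reading aid for the noninferiority statement above, the short snippet below restates the decision rule with the published numbers: noninferiority is shown because the upper bound of the 95% confidence interval for the treatment difference (0.04 percentage points) stays below the prespecified margin of 0.4 percentage points. This is only an illustration of the reported figures, not a reanalysis of trial data.

```python
# Illustrative check of the reported noninferiority criterion (numbers from the abstract).
margin = 0.40                      # prespecified noninferiority margin, percentage points
diff_estimate = -0.09              # estimated treatment difference (efsitora minus degludec)
ci_lower, ci_upper = -0.22, 0.04   # reported 95% confidence interval

noninferior = ci_upper < margin
print(f"difference {diff_estimate:+.2f} pp, 95% CI [{ci_lower:.2f}, {ci_upper:.2f}]")
print("noninferiority shown:", noninferior)  # True, because 0.04 < 0.40
```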
Conclusions
In adults with type 2 diabetes who had not previously received insulin, once-weekly efsitora was noninferior to once-daily degludec in reducing glycated hemoglobin levels. (Funded by Eli Lilly; QWINT-2 ClinicalTrials.gov number, NCT05362058.)
This article was published on September 10, 2024, at NEJM.org.
A data sharing statement provided by the authors is available with the full text of this article at NEJM.org.
Supported by Eli Lilly.
Disclosure forms provided by the authors are available with the full text of this article at NEJM.org.
We thank all the trial participants, Juliana Bue-Valleskey (Eli Lilly) for clinical trial design and technical consultation, and Alastair Knights (Eli Lilly) for medical writing assistance with an earlier version of the manuscript.
Supplementary Material
Protocol (nejmoa2403953_protocol.pdf); Supplementary Appendix (nejmoa2403953_appendix.pdf); Disclosure Forms (nejmoa2403953_disclosures.pdf); Data Sharing Statement (nejmoa2403953_data-sharing.pdf).
https://www.nejm.org/doi/full/10.1056/NEJMoa2403953
Using only the information included in the prompt/context block, answer the prompt in one paragraph or less.
How is the self employment tax rate distributed?
What Is The Self-Employment Tax Rate?
The self-employment tax rate is 15.3 percent, with 12.4 percent allocated to the Social Security system and the other 2.9
percent going to Medicare. If you worked as an employee of a company, your employer would pay half, which means only 6.2
percent would be taken out of your wages for Social Security and 1.45 percent for Medicare.
But as a self-employed individual, you are considered both the employer and the employee, which makes the entire self-employment tax burden yours. You do, however, get a tax deduction for one-half of the self-employed taxes paid as an above-the-line deduction to arrive at adjusted gross income.
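A quick worked example of the rates described above, using a hypothetical $50,000 of self-employment income; it ignores any thresholds, caps or adjustments that are not mentioned in this passage.

```python
# Illustrative arithmetic only; the $50,000 figure is hypothetical.
income = 50_000.00

social_security_rate = 0.124   # 12.4% to Social Security
medicare_rate = 0.029          # 2.9% to Medicare
self_employment_rate = social_security_rate + medicare_rate  # 15.3% total

se_tax = income * self_employment_rate        # 7,650.00
employee_share = income * (0.062 + 0.0145)    # 3,825.00, what a W-2 employee would see withheld
deduction = se_tax / 2                        # 3,825.00 above-the-line deduction

print(f"self-employment tax: {se_tax:,.2f}")
print(f"half of SE tax (deductible): {deduction:,.2f}")
print(f"employee-side withholding for comparison: {employee_share:,.2f}")
```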
The self-employment tax rate is 15.3 percent, with 12.4 percent allocated to the Social Security system and the other 2.9
percent going to Medicare. If you worked as an employee of a company, your employer would pay half, which means only 6.2
percent would be taken out of your wages for Social Security and 1.45 percent for Medicare.
But as a self-employed individual, you are considered both the employer and the employee, which makes the entire selfemployment tax burden yours. You do, however, get a tax deduction for one-half of the self-employed taxes paid as an abovethe-line deduction to arrive at adjusted gross income.
Using only the information included in the prompt/context block, answer the prompt in one paragraph or less.
How is the self employment tax rate distributed? |
Draw your answer from the text in this prompt and this prompt alone. Do not use outside information or external resources.
Summarize how athletes can be tested for doping in the Olympic Games Paris 2024.
The anti-doping rules for the Paris Games apply to “all Athletes entered in or preparing for the Olympic
Games Paris 2024 or who have otherwise been made subject to the authority of the IOC in connection
with the Olympic Games.” Additionally, a number of rules apply to other individuals, including coaches,
trainers, and operations staff for the Games.
Athletes seeking to qualify for and participating in the 2024 Olympic Games were subject to testing under
IOC’s anti-doping policy beginning in April 2024 and will continue to be so through the close of the
Games. IOC’s anti-doping rules are “in accordance” with the World Anti-Doping Code, which includes
technical and procedural rules for the administration of anti-doping programs, the prohibited substances
list, and standards for testing laboratories, among other things. In addition to testing and related
requirements for athletes, other participants are barred from supporting doping, evasion of testing, and
manipulation of test results.
As signatories to the Code, IOC and IPC are each responsible for ensuring that their events are conducted
in compliance with WADA’s anti-doping requirements. A principal component of that obligation is the
collection of biological samples from athletes that can be tested for banned substances. While IPC
administers its own testing program, IOC has delegated this responsibility to ITA for the 2024 Games.
ITA is an independent organization created in 2018 by IOC and WADA to manage testing programs for
international sports federations and major events.
IOC’s anti-doping rules require athletes to submit to testing based on WADA’s International Standard for
Testing and Investigations, which allows the collection of both blood and urine samples to test for
prohibited substances. According to the organizers, the anti-doping program for the Paris Games will
include over 1,000 workers and an “anti-doping control space” at each venue where ITA and AFLD will
collect samples. Following collection, AFLD will test samples.
| System Instruction: Draw your answer from the text in this prompt and this prompt alone. Do not use outside information or external resources.
Question: Summarize how athletes can be tested for doping in the Olympic Games Paris 2024.
Context: The anti-doping rules for the Paris Games apply to “all Athletes entered in or preparing for the Olympic
Games Paris 2024 or who have otherwise been made subject to the authority of the IOC in connection
with the Olympic Games.” Additionally, a number of rules apply to other individuals, including coaches,
trainers, and operations staff for the Games.
Athletes seeking to qualify for and participating in the 2024 Olympic Games were subject to testing under
IOC’s anti-doping policy beginning in April 2024 and will continue to be so through the close of the
Games. IOC’s anti-doping rules are “in accordance” with the World Anti-Doping Code, which includes
technical and procedural rules for the administration of anti-doping programs, the prohibited substances
list, and standards for testing laboratories, among other things. In addition to testing and related
requirements for athletes, other participants are barred from supporting doping, evasion of testing, and
manipulation of test results.
As signatories to the Code, IOC and IPC are each responsible for ensuring that their events are conducted
in compliance with WADA’s anti-doping requirements. A principal component of that obligation is the
collection of biological samples from athletes that can be tested for banned substances. While IPC
administers its own testing program, IOC has delegated this responsibility to ITA for the 2024 Games.
ITA is an independent organization created in 2018 by IOC and WADA to manage testing programs for
international sports federations and major events.
IOC’s anti-doping rules require athletes to submit to testing based on WADA’s International Standard for
Testing and Investigations, which allows the collection of both blood and urine samples to test for
prohibited substances. According to the organizers, the anti-doping program for the Paris Games will
include over 1,000 workers and an “anti-doping control space” at each venue where ITA and AFLD will
collect samples. Following collection, AFLD will test samples. |
Using only the information provided in the above context block, answer the following question: | Of the money the National Electric Vehicle Infrastructure Formula Program provides, $1 billion is distributed by which agency? | U.S. electric vehicle sales doubled between 2020 and 2021 and account for about 4% of all
vehicles sold. Infrastructure to charge those vehicles exists along a range, from 120 volt plugs in
many home garages to more expensive faster chargers with more than 400 volts. Market surveys have shown that consumers
are concerned about the lack of an extensive charging network across the country, as well as the related concerns that some
electric vehicles have a limited range before needing to be recharged. The IIJA grant programs were designed to address
those concerns along major U.S. highways. In addition, the IIJA directs FHWA to develop standards for charging
infrastructure funded by certain federal programs so charging is secure, provides a range of payment options, and meets
certain installation requirements.
The federal government has in the past provided limited financial support for installation of electric vehicle charging stations,
such as through the alternative fuel infrastructure tax credit—modified by the law commonly referred to as the Inflation
Reduction Act of 2022 (IRA, P.L. 117-169)—and the Congestion Mitigation Air Quality Improvement program. With just
over 50,000 charging stations in October 2022—and more than 130,000 ports for charging—electric vehicle charging
capacity is far below one estimate of the 2.4 million charging stations that may be necessary in 2030 to sustain an electric
vehicle fleet of 26 million vehicles (an estimate from one group of what may be needed to support California and other
states’ zero-emission vehicle (ZEV) goals).
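To put the figures in the preceding paragraph side by side, the rough arithmetic below compares the roughly 50,000 existing stations against the cited estimate of 2.4 million stations needed in 2030 for a 26-million-vehicle fleet. It is back-of-the-envelope only, using just the numbers quoted above.

```python
# Rough arithmetic using only the figures quoted in the text above.
stations_2022 = 50_000
ports_2022 = 130_000
stations_needed_2030 = 2_400_000
ev_fleet_2030 = 26_000_000

share_of_need = stations_2022 / stations_needed_2030         # about 0.02, roughly 2%
vehicles_per_station = ev_fleet_2030 / stations_needed_2030   # about 10.8 EVs per station

print(f"existing stations cover about {share_of_need:.1%} of the 2030 estimate")
print(f"implied ratio: about {vehicles_per_station:.1f} EVs per charging station")
```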
The two $7.5 billion grant programs established by IIJA are
• The National Electric Vehicle Infrastructure (NEVI) Formula Program, which is to provide $5 billion in
grants, with $1 billion distributed by FHWA in each of FY2022-FY2026. All states, the District of
Columbia, and Puerto Rico are eligible, and funds must be used for charging along the national highway
system and primarily along highways already designated as alternative fuel corridors. Under existing
FHWA guidelines, new charging stations should be spaced a maximum of 50 miles apart. A new FHWA
rule sets additional standards and requirements. In September 2022, all state plans were approved, opening
access to FY2022 and FY2023 NEVI funding.
• The Charging and Fueling Infrastructure (CFI) grant program, which is to provide $2.5 billion over five
years to strategically deploy alternative fuel infrastructure for vehicles powered by electricity and other
fuels. Half of the new funding is to be used along FHWA corridors earmarked for those fuels. The other
half is to be applied to uses in public building parking lots and in similar publicly accessible locations. CFI
grants differ from NEVI in two ways: (a) grants are to be subject to a competitive process, unlike the
formula-based NEVI; and (b) priority is to be given to applicants in rural areas, disadvantaged
communities, and areas with high rates of multi-unit housing.
Using only the information provided in the above context block, answer the following question:
Of the money the National Electric Vehicle Infrastructure Formula Program provides, $1 billion is distributed by which agency?
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided."
Macular degeneration runs in my family and I'm getting older so I've been thinking more about it. What are the risk factors? Also, what are some of the symptoms I should be looking for? Are there some tests I should ask my eye doctor about at my next eye exam?
AMD affects the central vision, and with it, the ability to see fine details. In AMD, a part of the retina called the macula is damaged. In advanced stages, people lose their ability to drive, to see faces, and to read smaller print. In its early stages, AMD may have no signs or symptoms, so people may not suspect they have it.
Types of Age-Related Macular Degeneration and Causes
The two primary types of age-related macular degeneration have different causes:
Dry. This type is the most common. About 80% of those with AMD have the dry form. Its exact cause is unknown, although both genetic and environmental factors are thought to play a role. This happens as the light-sensitive cells in the macula slowly break down, generally one eye at a time. The loss of vision in this condition is usually slow and gradual. It is believed that the age-related damage of an important support membrane under the retina contributes to dry age-related macular degeneration.
Wet. Though this type is less common, it usually leads to more severe vision loss in patients than dry AMD. It is the most common cause of severe loss of vision. Wet AMD happens when abnormal blood vessels start to grow beneath the retina. They leak fluid and blood — hence the name wet AMD — and can create a large blind spot in the center of the visual field.
Risk Factors for Age-Related Macular Degeneration
There are several risk factors that can contribute to developing age-related macular degeneration, including:
Being 50 and older
Eating a diet high in saturated fat
Smoking
High blood pressure or hypertension
Age-Related Macular Degeneration Symptoms
The following are the most common symptoms of age-related macular degeneration. However, each individual may experience symptoms differently. Symptoms may include:
Blurry or fuzzy vision
Difficulty recognizing familiar faces
Straight lines appear wavy
A dark, empty area or blind spot appears in the center of vision
Loss of central vision, which is necessary for driving, reading, recognizing faces and performing close-up work
The presence of drusen, which are tiny yellow deposits in the retina, is one of the most common early signs of age-related macular degeneration. It may mean the eye is at risk for developing more severe age-related macular degeneration. These will be visible to your doctor during an eye exam.
The symptoms of age-related macular degeneration may look like other eye conditions. Speak with an eye care professional for diagnosis.
Research shows: AI used to predict disease progression
Johns Hopkins researchers used an artificial intelligence computer program and other data to predict the likelihood that a person’s disease could progress to the wet form of age-related macular degeneration.
Age-Related Macular Degeneration Diagnosis
In addition to a complete medical history and eye exam, your eye doctor may do the following tests to diagnose age-related macular degeneration:
Visual acuity test. This common eye chart test measures vision ability at various distances.
Pupil dilation. The pupil is widened with eyedrops to allow a close-up examination of the eye’s retina.
Fluorescein angiography. Used to detect wet age-related macular degeneration, this diagnostic test involves a special dye injected into a vein in the arm. Pictures are then taken as the dye passes through the blood vessels in the retina, helping the doctor evaluate if the blood vessels are leaking and whether or not the leaking can be treated.
Amsler grid. Used to detect wet age-related macular degeneration, this test uses a checkerboard-like grid to determine if the straight lines in the pattern appear wavy or missing to the patient. Both indications may signal the possibility of age-related macular degeneration.
Amsler Grid
To use the Amsler grid, follow these steps:
Wearing any glasses you normally use to read, hold the grid 12 to 15 inches away from your face in good light.
Cover one eye.
Look directly at the center dot with your uncovered eye and keep your eye focused on it.
While looking directly at the center dot, notice in your side vision if all grid lines look straight or if any lines or areas look blurry, wavy, dark or blank.
Follow the same steps with the other eye.
If you notice any areas of the grid that appear darker, wavy, blank or blurry, contact your ophthalmologist right away.
Age-Related Macular Degeneration Treatment
Specific treatment for age-related macular degeneration will be determined by your doctor based on:
Your age, overall health and medical history
Extent and nature of the disease
Your tolerance for specific medications, procedures or low-vision therapies
Expectations for the course of the disease
Your opinion or preference
Currently, there is no treatment for dry age-related macular degeneration, though vision rehabilitation programs and low-vision devices can be used to build visual skills, develop new ways to perform daily living activities and adjust to living with age-related macular degeneration.
The main treatment for wet AMD is the injection of medications called anti-VEGF agents. VEGF stands for vascular endothelial growth factor. A high level of VEGF in the eye is linked to the formation of the abnormal blood vessels that cause much of the damage in wet AMD. Anti-VEGF agents are used to combat the disease process and reduce the damaging effects of these leaky abnormal blood vessels. They are also able to effectively stabilize vision in many patients.
In some patients, anti-VEGF injections actually improve the level of visual acuity. Anti-VEGF medications are administered by injecting them directly into the affected eye. Although this sounds daunting, the procedure is done with a very fine needle and under the cover of numbing (anesthetic) eyedrops, so patients are usually very comfortable. Anti-VEGF treatment is usually administered regularly over time, requiring multiple injections to maintain the treatment effect, and your retinal physician will discuss the best treatment schedule for you. In selected patients, other treatments, such as laser therapy, can be used, if necessary.
Complications of Age-Related Macular Degeneration
Age-related macular degeneration can result in severe loss of central vision but rarely causes blindness. It can, however, make it difficult to read, drive or perform other daily activities that require fine central vision. In AMD, the health of the peripheral retina is unaffected, so patients can rest assured that their peripheral (side) vision, and their ability to walk around without bumping into things, is usually preserved.
https://www.hopkinsmedicine.org/health/conditions-and-diseases/agerelated-macular-degeneration-amd
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources.
I thought that understanding how computers work was difficult until I learned that there's something actually harder: Quantum computing. Concepts of qubits (quantum dots, superconducting qubits, photons, etc) and the key principles sound more like they belong to physics. What are these concepts? Explain them in less than 500 words.
Understanding quantum computing requires understanding these four key principles of quantum mechanics:
Superposition: Superposition is the state in which a quantum particle or system can represent not just one possibility, but a combination of multiple possibilities.
Entanglement: Entanglement is the process in which multiple quantum particles become correlated more strongly than regular probability allows.
Decoherence: Decoherence is the process in which quantum particles and systems can decay, collapse or change, converting into single states measurable by classical physics.
Interference: Interference is the phenomenon in which entangled quantum states can interact and produce more and less likely probabilities.
Qubits
While classical computers rely on binary bits (zeros and ones) to store and process data, quantum computers can encode even more data at once using quantum bits, or qubits, in superposition.
A qubit can behave like a bit and store either a zero or a one, but it can also be a weighted combination of zero and one at the same time. When combined, qubits in superposition can scale exponentially. Two qubits can store four bits of information, three can store eight, and four can store twelve.
However, each qubit can only output a single bit of information at the end of the computation. Quantum algorithms work by storing and manipulating information in a way inaccessible to classical computers, which can provide speedups for certain problems.
As silicon chip and superconductor development has scaled over the years, it is distinctly possible that we might soon reach a material limit on the computing power of classical computers. Quantum computing could provide a path forward for certain important problems.
With leading institutions such as IBM, Microsoft, Google and Amazon joining eager startups such as Rigetti and Ionq in investing heavily in this exciting new technology, quantum computing is estimated to become a USD 1.3 trillion industry by 2035.1
Secure your enterprise for the quantum era
Quantum computers are scaling rapidly. Soon, they will be powerful enough to solve previously unsolvable problems. This opportunity comes with a global challenge: quantum computers will be able to break some of the most widely-used security protocols in the world.
Learn more
How do quantum computers work?
A primary difference between classical and quantum computers is that quantum computers use qubits instead of bits to store exponentially more information. While quantum computing does use binary code, qubits process information differently from classical computers. But what are qubits and where do they come from?
What are qubits?
Generally, qubits are created by manipulating and measuring quantum particles (the smallest known building blocks of the physical universe), such as photons, electrons, trapped ions and atoms. Qubits can also engineer systems that behave like a quantum particle, as in superconducting circuits.
To manipulate such particles, qubits must be kept extremely cold to minimize noise and prevent them from providing inaccurate results or errors resulting from unintended decoherence.
There are many different types of qubits used in quantum computing today, with some better suited for different types of tasks.
A few of the more common types of qubits in use are as follows:
Superconducting qubits: Made from superconducting materials operating at extremely low temperatures, these qubits are favored for their speed in performing computations and fine-tuned control.
Trapped ion qubits: Trapped ion particles can also be used as qubits and are noted for long coherence times and high-fidelity measurements.
Quantum dots: Quantum dots are small semiconductors that capture a single electron and use it as a qubit, offering promising potential for scalability and compatibility with existing semiconductor technology.
Photons: Photons are individual light particles used to send quantum information across long distances through optical fiber cables and are currently being used in quantum communication and quantum cryptography.
Neutral atoms: Commonly occurring neutral atoms charged with lasers are well suited for scaling and performing operations.
When processing a complex problem, such as factoring large numbers, classical bits become bound up by holding large quantities of information. Quantum bits behave differently. Because qubits can hold a superposition, a quantum computer that uses qubits can approach the problem in ways different from classical computers.
As a helpful analogy for understanding how quantum computers use qubits to solve complicated problems, imagine you are standing in the center of a complicated maze. To escape the maze, a traditional computer would have to “brute force” the problem, trying every possible combination of paths to find the exit. This kind of computer would use bits to explore new paths and remember which ones are dead ends.
Comparatively, a quantum computer might derive a bird's-eye view of the maze, testing multiple paths simultaneously and using quantum interference to reveal the correct solution. However, qubits don't literally try every path one by one; instead, quantum computers encode the problem in the probability amplitudes of qubits and then measure an outcome. These amplitudes function like waves, overlapping and interfering with each other. Where the waves arrive out of phase they cancel, effectively eliminating paths that lead to wrong answers; where they stay in phase they reinforce one another, and the surviving coherent wave or waves point to the solution.
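That cancellation and reinforcement can be shown with a toy two-path calculation. The sketch below (NumPy assumed; an illustrative model, not the control flow of a real quantum processor) sends a qubit through a Hadamard gate twice: the two paths into the 1 outcome arrive out of phase and cancel, while the paths into the 0 outcome reinforce.

```python
import numpy as np

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)   # Hadamard: splits one path into two

state = np.array([1.0, 0.0])               # start in state 0
state = H @ state                           # superposition: two paths in flight
state = H @ state                           # recombine the paths

print(np.abs(state) ** 2)                   # [1. 0.] -- the paths to 1 cancelled out
```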
Key principles of quantum computing
When discussing quantum computers, it is important to understand that quantum mechanics is not like traditional physics. The behaviors of quantum particles often appear to be bizarre, counterintuitive or even impossible. Yet the laws of quantum mechanics dictate the order of the natural world.
Describing the behaviors of quantum particles presents a unique challenge. Most common-sense paradigms for the natural world lack the vocabulary to communicate the surprising behaviors of quantum particles.
To understand quantum computing, it is important to understand a few key terms:
Superposition
Entanglement
Decoherence
Interference
Superposition
A qubit itself isn't very useful. But it can place the quantum information it holds into a state of superposition, which represents a combination of all possible configurations of the qubit. Groups of qubits in superposition can create complex, multidimensional computational spaces. Complex problems can be represented in new ways in these spaces.
This superposition of qubits gives quantum computers their inherent parallelism, allowing them to process many inputs simultaneously.
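As a rough numerical picture of that parallelism (a sketch assuming NumPy; illustrative only), three qubits can be placed into superposition together, giving one amplitude for each of the 2**3 = 8 possible configurations at the same time.

```python
import numpy as np

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

H3 = np.kron(np.kron(H, H), H)      # the same gate applied to three qubits at once
zero3 = np.eye(8)[0]                # the register starting as 000

state = H3 @ zero3
print(state.round(3))               # eight equal amplitudes (~0.354 each)
print((np.abs(state) ** 2).sum())   # total probability is still 1.0
```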
Entanglement
Entanglement is the ability of qubits to correlate their state with other qubits. Entangled systems are so intrinsically linked that when quantum processors measure a single entangled qubit, they can immediately determine information about other qubits in the entangled system.
When a quantum system is measured, its state collapses from a superposition of possibilities into a binary state, which can be registered like binary code as either a zero or a one.
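A minimal numerical example of such correlation is the two-qubit Bell state, sketched below with NumPy (illustrative only): only the outcomes 00 and 11 carry any probability, so learning one qubit's value immediately fixes the other's.

```python
import numpy as np

bell = np.zeros(4)
bell[0b00] = 1 / np.sqrt(2)          # amplitude for "both qubits read 0"
bell[0b11] = 1 / np.sqrt(2)          # amplitude for "both qubits read 1"

for outcome, p in enumerate(np.abs(bell) ** 2):
    print(format(outcome, "02b"), round(float(p), 3))
# 00 0.5, 01 0.0, 10 0.0, 11 0.5
```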
Decoherence
Decoherence is the process in which a system in a quantum state collapses into a nonquantum state. It can be triggered intentionally, by measuring a quantum system, or unintentionally, by environmental factors such as heat and stray electromagnetic noise. Controlled decoherence at the end of a computation is what allows quantum computers to provide measurements and interact with classical computers.
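A toy model of that collapse, assuming NumPy (real environmental decoherence is gradual and messier, but the end effect on the register is similar): measuring a superposed qubit forces a definite, classically readable value, and the superposition is gone afterwards.

```python
import numpy as np

state = np.array([0.6, 0.8])                   # superposition: weights 0.6 and 0.8
probs = np.abs(state) ** 2                     # [0.36, 0.64]

outcome = np.random.choice([0, 1], p=probs)    # the measurement result is a plain bit
state = np.eye(2)[outcome]                     # afterwards the state is definite

print(outcome, state)                          # e.g. 1 [0. 1.]
```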
Interference
An environment of entangled qubits placed into a state of collective superposition structures information in a way that looks like waves, with amplitudes associated with each outcome. These amplitudes become the probabilities of the outcomes of a measurement of the system. These waves can build on each other when many of them peak at a particular outcome, or cancel each other out when peaks and troughs interact. Amplifying a probability or canceling out others are both forms of interference. | [question]
I thought that understanding how computers work was difficult until I learned that there's something actually harder: Quantum computing. Concepts of qubits (quantum dots, superconducting qubits, photons, etc) and the key principles sound more like they belong to physics. What are these concepts? Explain them in less than 500 words.
=====================
[text]
https://www.ibm.com/topics/quantum-computing#:~:text=Schneider%2C%20Ian%20Smalley-,What%20is%20quantum%20computing%3F,the%20most%20powerful%20classical%20computers.
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
You must answer the following questions using only the information found in the provided context block. Do not under any circumstances, use external sources or prior knowledge. Answer in complete sentences but no longer than 250 words. | What are the differences between Points 6 and 7 on the rights of sick children in health care? | 6. Every child and young person has a right to information, in a form that is understandable to them. Children and young people have a right to information that they can understand about their health and healthcare. This includes information about the choice of health care services available. Special attention and some creativity are often necessary to ensure that children have the freedom to seek, receive and impart information and ideas, not only orally but also through other means of the child’s or young person’s choice, such as play and art. Ensuring that the language and format used are appropriate to the child’s or young person’s abilities and level of understanding is essential, as is ensuring that they have understood the information given and had every opportunity to participate in the conversations about their health and care. This right to information includes the right of tamariki and rangatahi to have access to information in Te Reo Māori and for those from culturally and linguistically diverse backgrounds to have access to information in their own language. It is crucial that health professionals talk directly to children and young people, as well as to their families/whānau, even if the child or young person may seem unable to comprehend. Health professionals and families/whānau should be as open as possible with children and young people about their health and healthcare. Like all patients, children and young people are entitled to know what is going to happen to them before a procedure occurs and to be given honest information about their condition and treatment outcomes, and to be helped to select and practice strategies for coping. Giving children and young people timely and accurate information means that they can retain a sense of control about their healthcare, particularly in hospital. Advance preparation for hospitalisation, healthcare procedures or impending surgery provides children and young people with a sense of mastery over the healthcare environment and helps them to cope more effectively with potentially stressful situations.
7. Every child and young person has a right to participate in decision-making and, as appropriate to their capabilities, to make decisions about their care. Children and young people have a right to be involved in decision-making about their healthcare, to the greatest extent possible in line with their capacities for understanding. The right to be involved in making decisions also includes the right to be involved in decisions about the use, return or disposal of any bodily parts or substances removed, changed or added in the course of health care. Children and young people should be offered healthcare choices wherever possible. Further, they are always entitled to a second opinion. Whenever a child or young person has questions and ideas about their healthcare, these should be heard. If their views cannot be acted on, they are entitled to an explanation. In order for children and young people to participate in decision-making, the health professionals caring for them ought to be available, trained and committed to communicating with children and young people. Effective communication is critical in healthcare, as children, young people and their families/whānau require appropriate information in order to provide informed consent to treatment. A child or young person needs to be able to talk with the staff caring for him or her, to understand who the staff are and what they do, and to question them about his or her condition and treatment. Participation can include both verbal and nonverbal communication by children and young people with health professionals. It should also include opportunities to communicate through play, art and other media of the child’s or young person’s choice. Health professionals need to pay attention to ensure that appropriate responses are made to the nonverbal cues and communication by children and young people who use this as their main form (for example, infants, very young children and those with disabilities). The right to participation extends beyond the right of every individual child and young person to participate in his or her care. It includes encouraging and supporting children and young people as groups to be involved in consultation on the development, implementation and evaluation of the services, policies and strategies that have an impact on them. Informed consent is to be sought from children, young people and their families/whānau before they are involved in teaching or research. Also, those who do agree to participate must have the opportunity to withdraw at any time without having to give a reason, even if they consent initially. The decision not to participate in teaching or research must not alter access to treatment. Ethical oversight by a Human Research Ethics Committee of all research projects conducted in child healthcare services is part of protecting the children and young people involved.
| You must answer the following questions using only the information found in the provided context block. Do not under any circumstances, use external sources or prior knowledge. Answer in complete sentences but no longer than 250 words. You may include Te Reo Māori in your answer.
What are the differences between points 6 and 7 on the rights of sick children in health care?
Only use information from the context given to you to answer the question. Do not use any outside sources. Do not be overly formal or robotic in your response. | What names does the company operate under? | SERVICE AND MAINTENANCE
Replacement Parts
• Water Filtration - Replacement water filtration disks can be purchased through
your local retailer.
• Decanters – You can usually purchase a replacement decanter from the store where
you purchased your coffeemaker. If you are unable to find a replacement, please call
1-800-667-8623 in Canada for information on where you can find a store that carries
replacement decanters.
Repairs
If your coffeemaker requires service, do not return it to the store where you purchased
it. All repairs and replacements must be made by Sunbeam or an authorized Sunbeam
Service Center. Please call us at the following toll-free telephone number to find the
location of the nearest authorized service center:
Canada 1-800-667-8623
You may also visit our website at www.sunbeam.ca for a list of service centers.
To assist us in serving you, please have the coffeemaker model number and date
of purchase available when you call. The model number is stamped on the bottom
metal plate of the coffeemaker.
We welcome your questions, comments or suggestions.
In all your communications, please include your complete name, address and
telephone number and a description of the problem.
Visit our website at www.sunbeam.ca and discover the secret to brewing the
perfect cup of coffee. You will also find a rich blend of gourmet recipes, entertaining
tips and the latest information on SUNBEAM™ products.
WARRANTY INFORMATION
1-YEAR LIMITED WARRANTY
Sunbeam Products, Inc. doing business as Jarden Consumer Solutions or if in Canada, Sunbeam
Corporation (Canada) Limited doing business as Jarden Consumer Solutions (collectively “JCS”)
warrants that for a period of one year from the date of purchase, this product will be free from
defects in material and workmanship. JCS, at its option, will repair or replace this product or any
component of the product found to be defective during the warranty period. Replacement will
be made with a new or remanufactured product or component. If the product is no longer
available, replacement may be made with a similar product of equal or greater value. This is your
exclusive warranty. Do NOT attempt to repair or adjust any electrical or mechanical functions on
this product. Doing so will void this warranty.
This warranty is valid for the original retail purchaser from the date of initial retail purchase and is
not transferable. Keep the original sales receipt. Proof of purchase is required to obtain warranty
performance. JCS dealers, service centers, or retail stores selling JCS products do not have the
right to alter, modify or in any way change the terms and conditions of this warranty.
This warranty does not cover normal wear of parts or damage resulting from any of the following:
negligent use or misuse of the product, use on improper voltage or current, use contrary to the
operating instructions, disassembly, repair or alteration by anyone other than JCS or an authorized
JCS service center. Further, the warranty does not cover: Acts of God, such as fire, flood,
hurricanes and tornadoes.
What are the limits on JCS’s Liability?
JCS shall not be liable for any incidental or consequential damages caused by the breach of
any express, implied or statutory warranty or condition.
Except to the extent prohibited by applicable law, any implied warranty or condition of
merchantability or fitness for a particular purpose is limited in duration to the duration of the
above warranty.
JCS disclaims all other warranties, conditions or representations, express, implied, statutory or
otherwise.
JCS shall not be liable for any damages of any kind resulting from the purchase, use or misuse of,
or inability to use the product including incidental, special, consequential or similar damages or
loss of profits, or for any breach of contract, fundamental or otherwise, or for any claim brought
against purchaser by any other party.
Some provinces, states or jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages or limitations on how long an implied warranty lasts, so the above
limitations or exclusion may not apply to you.
This warranty gives you specific legal rights, and you may also have other rights that vary from
province to province, state to state or jurisdiction to jurisdiction.
How to Obtain Warranty Service
In the U.S.A.
If you have any question regarding this warranty or would like to obtain warranty service,
please call 1-800-458-8407 and a convenient service center address will be provided to you.
In Canada
If you have any question regarding this warranty or would like to obtain warranty service,
please call 1-800-667-8623 and a convenient service center address will be provided to you.
In the U.S.A., this warranty is offered by Sunbeam Products, Inc. doing business as Jarden
Consumer Solutions located in Boca Raton, Florida 33431. In Canada, this warranty is offered by
Sunbeam Corporation (Canada) Limited doing business as Jarden Consumer Solutions, located
at 20 B Hereford Street, Brampton, Ontario L6Y 0M1. If you have any other problem or claim in
connection with this product, please write our Consumer Service Department.
PLEASE DO NOT RETURN THIS PRODUCT TO ANY OF THESE ADDRESSES
OR TO THE PLACE OF PURCHASE. | SERVICE AND MAINTENANCE
Replacement Parts
• Water Filtration - Replacement water filtration disks can be purchased through
your local retailer.
• Decanters – You can usually purchase a replacement decanter from the store where
you purchased your coffeemaker. If you are unable to find a replacement, please call
1-800-667-8623 in Canada for information on where you can find a store that carries
replacement decanters.
Repairs
If your coffeemaker requires service, do not return it to the store where you purchased
it. All repairs and replacements must be made by Sunbeam or an authorized Sunbeam
Service Center. Please call us at the following toll-free telephone number to find the
location of the nearest authorized service center:
Canada 1-800-667-8623
You may also visit our website at www.sunbeam.ca for a list of service centers.
To assist us in serving you, please have the coffeemaker model number and date
of purchase available when you call. The model number is stamped on the bottom
metal plate of the coffeemaker.
We welcome your questions, comments or suggestions.
In all your communications, please include your complete name, address and
telephone number and a description of the problem.
Visit our website at www.sunbeam.ca and discover the secret to brewing the
perfect cup of coffee. You will also find a rich blend of gourmet recipes, entertaining
tips and the latest information on SUNBEAM TM products.
WARRANTY INFORMATION
1-YEAR LIMITED WARRANTY
Sunbeam Products, Inc. doing business as Jarden Consumer Solutions or if in Canada, Sunbeam
Corporation (Canada) Limited doing business as Jarden Consumer Solutions (collectively “JCS”)
warrants that for a period of one year from the date of purchase, this product will be free from
defects in material and workmanship. JCS, at its option, will repair or replace this product or any
component of the product found to be defective during the warranty period. Replacement will
be made with a new or remanufactured product or component. If the product is no longer
available, replacement may be made with a similar product of equal or greater value. This is your
exclusive warranty. Do NOT attempt to repair or adjust any electrical or mechanical functions on
this product. Doing so will void this warranty.
This warranty is valid for the original retail purchaser from the date of initial retail purchase and is
not transferable. Keep the original sales receipt. Proof of purchase is required to obtain warranty
performance. JCS dealers, service centers, or retail stores selling JCS products do not have the
right to alter, modify or any way change the terms and conditions of this warranty.
This warranty does not cover normal wear of parts or damage resulting from any of the following:
negligent use or misuse of the product, use on improper voltage or current, use contrary to the
operating instructions, disassembly, repair or alteration by anyone other than JCS or an authorized
JCS service center. Further, the warranty does not cover: Acts of God, such as fire, flood,
hurricanes and tornadoes.
What are the limits on JCS’s Liability?
JCS shall not be liable for any incidental or consequential damages caused by the breach of
any express, implied or statutory warranty or condition.
Except to the extent prohibited by applicable law, any implied warranty or condition of
merchantability or fitness for a particular purpose is limited in duration to the duration of the
above warranty.
JCS disclaims all other warranties, conditions or representations, express, implied, statutory or
otherwise.
JCS shall not be liable for any damages of any kind resulting from the purchase, use or misuse of,
or inability to use the product including incidental, special, consequential or similar damages or
loss of profits, or for any breach of contract, fundamental or otherwise, or for any claim brought
against purchaser by any other party.
Some provinces, states or jurisdictions do not allow the exclusion or limitation of incidental or
consequential damages or limitations on how long an implied warranty lasts, so the above
limitations or exclusion may not apply to you.
This warranty gives you specific legal rights, and you may also have other rights that vary from
province to province, state to state or jurisdiction to jurisdiction.
How to Obtain Warranty Service
In the U.S.A.
If you have any question regarding this warranty or would like to obtain warranty service,
please call 1-800-458-8407 and a convenient service center address will be provided to you.
In Canada
If you have any question regarding this warranty or would like to obtain warranty service,
please call 1-800-667-8623 and a convenient service center address will be provided to you.
In the U.S.A., this warranty is offered by Sunbeam Products, Inc. doing business as Jarden
Consumer Solutions located in Boca Raton, Florida 33431. In Canada, this warranty is offered by
Sunbeam Corporation (Canada) Limited doing business as Jarden Consumer Solutions, located
at 20 B Hereford Street, Brampton, Ontario L6Y 0M1. If you have any other problem or claim in
connection with this product, please write our Consumer Service Department.
PLEASE DO NOT RETURN THIS PRODUCT TO ANY OF THESE ADDRESSES
OR TO THE PLACE OF PURCHASE.
Only refer to the document for your answer. Do not use outside sources. | Based on the article when might copyrighted works to train AI programs be considered a fair use? | Congressional Research Service
**Generative Artificial Intelligence and Copyright Law**
September 29, 2023
Copyright in Works Created with Generative AI
A recent lawsuit challenged the human-authorship requirement in the context of works purportedly “authored” by AI. In June 2022, Stephen Thaler sued the Copyright Office for denying his application to register a visual artwork that he claims was authored “autonomously” by an AI program called the Creativity Machine. Dr. Thaler argued that human authorship is not required by the Copyright Act. On August 18, 2023, a federal district court granted summary judgment in favor of the Copyright Office. The court held that “human authorship is an essential part of a valid copyright claim,” reasoning that only human authors need copyright as an incentive to create works. Dr. Thaler has stated that he plans to appeal the decision.
Assuming a copyrightable work requires a human author, works created by humans using generative AI could still be entitled to copyright protection, depending on the nature of human involvement in the creative process. However, a recent copyright proceeding and subsequent Copyright Registration Guidance indicate that the Copyright Office is unlikely to find the requisite human authorship where an AI program generates works in response to text prompts. In September 2022, Kris Kashtanova registered a copyright for a graphic novel illustrated with images that Midjourney generated in response to text inputs. In October 2022, the Copyright Office initiated cancellation proceedings, noting that Kashtanova had not disclosed the use of AI. Kashtanova responded by arguing that the images were made via “a creative, iterative process.” On February 21, 2023, the Copyright Office determined that the images were not copyrightable, deciding that Midjourney, rather than Kashtanova, authored the “visual material.” In March 2023, the Copyright Office released guidance stating that, when AI “determines the expressive elements of its output, the generated material is not the product of human authorship.”
Some commentators assert that some AI-generated works should receive copyright protection, arguing that AI programs are like other tools that human beings have used to create copyrighted works. For example, the Supreme Court has held since the 1884 case Burrow-Giles Lithographic Co. v. Sarony that photographs can be entitled to copyright protection where the photographer makes decisions regarding creative elements such as composition, arrangement, and lighting. Generative AI programs might be seen as a new tool analogous to the camera, as Kashtanova argued.
Other commentators and the Copyright Office dispute the photography analogy and question whether AI users exercise sufficient creative control for AI to be considered merely a tool. In Kashtanova’s case, the Copyright Office reasoned that Midjourney was not “a tool that [] Kashtanova controlled and guided to reach [their] desired image” because it “generates images in an unpredictable way.” The Copyright Office instead compared the AI user to “a client who hires an artist” and gives that artist only “general directions.” The office’s March 2023 guidance similarly claims that “users do not exercise ultimate creative control over how [generative AI] systems interpret prompts and generate materials.” One of Kashtanova’s lawyers, on the other hand, argues that the Copyright Act does not require such exacting creative control, noting that certain photographs and modern art incorporate a degree of happenstance.
Some commentators argue that the Copyright Act’s distinction between copyrightable “works” and noncopyrightable “ideas” supplies another reason that copyright should not protect AI-generated works. One law professor has suggested that the human user who enters a text prompt into an AI program—for instance, asking DALL-E “to produce a painting of hedgehogs having a tea party on the beach”—has “contributed nothing more than an idea” to the finished work. According to this argument, the output image lacks a human author and cannot be copyrighted. While the Copyright Office’s actions indicate that it may be challenging to obtain copyright protection for AI-generated works, the issue remains unsettled. Applicants may file suit in U.S. district court to challenge the Copyright Office’s final decisions to refuse to register a copyright (as Dr. Thaler did), and it remains to be seen whether federal courts will agree with all of the office’s decisions. While the Copyright Office notes that courts sometimes give weight to the office’s experience and expertise in this field, courts will not necessarily adopt the office’s interpretations of the Copyright Act.
In addition, the Copyright Office’s guidance accepts that works “containing” AI-generated material may be copyrighted under some circumstances, such as “sufficiently creative” human arrangements or modifications of AI-generated material or works that combine AI-generated and human-authored material. The office states that the author may only claim copyright protection “for their own contributions” to such works, and they must identify and disclaim AI-generated parts of the work if they apply to register their copyright. In September 2023, for instance, the Copyright Office Review Board affirmed the office’s refusal to register a copyright for an artwork that was generated by Midjourney and then modified in various ways by the applicant, since the applicant did not disclaim the AI-generated material.
Who Owns the Copyright to Generative AI Outputs?
Assuming some AI-created works may be eligible for copyright protection, who owns that copyright? In general, the Copyright Act vests ownership “initially in the author or authors of the work.” Given the lack of judicial or Copyright Office decisions recognizing copyright in AI-created works to date, however, no clear rule has emerged identifying who the “author or authors” of these works could be. Returning to the photography analogy, the AI’s creator might be compared to the camera maker, while the AI user who prompts the creation of a specific work might be compared to the photographer who uses that camera to capture a specific image. On this view, the user would be considered the author and, therefore, the initial copyright owner. The creative choices involved in coding and training the AI, on the other hand, might give an AI’s creator a stronger claim to some form of authorship than the manufacturer of a camera.
Does the AI Training Process Infringe Copyright in Other Works?
AI are “trained” to create literary, visual, and other artistic works by exposing the program to large amounts of data, which may include text, images, and other works downloaded from the internet. This training process involves making digital copies of existing works. As the U.S. Patent and Trademark Office has described, this process “will almost by definition involve the reproduction of entire works or substantial portions thereof.” OpenAI, for example, acknowledges that its programs are trained on “large, publicly available datasets that include copyrighted works” and that this process “involves first making copies of the data to be analyzed” (although it now offers an option to remove images from training future image generation models). Creating such copies without permission may infringe the copyright holders’ exclusive right to make reproductions of their work.
AI companies may argue that their training processes constitute fair use and are therefore noninfringing. Whether or not copying constitutes fair use depends on four statutory factors under 17 U.S.C. § 107: 1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; 2. the nature of the copyrighted work; 3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and 4. the effect of the use upon the potential market for or value of the copyrighted work.
Some stakeholders argue that the use of copyrighted works to train AI programs should be considered a fair use under these factors. Regarding the first factor, OpenAI argues its purpose is “transformative” as opposed to “expressive” because the training process creates “a useful generative AI system.” OpenAI also contends that the third factor supports fair use because the copies are not made available to the public but are used only to train the program. For support, OpenAI cites The Authors Guild, Inc. v. Google, Inc., in which the U.S. Court of Appeals for the Second Circuit held that Google’s copying of entire books to create a searchable database that displayed excerpts of those books constituted fair use.
Regarding the fourth fair use factor, some generative AI applications have raised concern that training AI programs on copyrighted works allows them to generate similar works that compete with the originals. For example, an AI-generated song called “Heart on My Sleeve,” made to sound like the artists Drake and The Weeknd, was heard millions of times on streaming services. Universal Music Group, which has deals with both artists, argues that AI companies violate copyright by using these artists’ songs in training data. OpenAI states that its visual art program DALL-E 3 “is designed to decline requests that ask for an image in the style of a living artist.”
Plaintiffs have filed multiple lawsuits claiming the training process for AI programs infringed their copyrights in written and visual works. These include lawsuits by the Authors Guild and authors Paul Tremblay, Michael Chabon, Sarah Silverman, and others against OpenAI; separate lawsuits by Michael Chabon, Sarah Silverman, and others against Meta Platforms; proposed class action lawsuits against Alphabet Inc. and Stability AI and Midjourney; and a lawsuit by Getty Images against Stability AI. The Getty Images lawsuit, for instance, alleges that “Stability AI has copied at least 12 million copyrighted images from Getty Images’ websites . . . in order to train its Stable Diffusion model.” This lawsuit appears to dispute any characterization of fair use, arguing that Stable Diffusion is a commercial product, weighing against fair use under the first statutory factor, and that the program undermines the market for the original works, weighing against fair use under the fourth factor.
In September 2023, a U.S. district court ruled that a jury trial would be needed to determine whether it was fair use for an AI company to copy case summaries from Westlaw, a legal research platform, to train an AI program to quote pertinent passages from legal opinions in response to questions from a user. The court found that, while the defendant’s use was “undoubtedly commercial,” a jury would need to resolve factual disputes concerning whether the use was “transformative” (factor 1), to what extent the nature of the plaintiff’s work favored fair use (factor 2), whether the defendant copied more than needed to train the AI program (factor 3), and whether the AI program would constitute a “market substitute” for Westlaw (factor 4). While the AI program at issue might not be considered “generative” AI, the same kinds of facts might be relevant to a court’s fair-use analysis of making copies to train generative AI models.
| <Context>
=======
<Task>
=======
Only refer to the document for your answer. Do not use outside sources.
<Query>
=======
Based on the article when might copyrighted works to train AI programs be considered a fair use? |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | I'm researching intermittent fasting as a weight loss program as a whole for my nutritional class. To help me prepare for the paper I want you summarize the study findings in the paragraph, but don't mention the limitations. Then provide a list of benefits for intermittent fasting and any potential negatives based on intermittent fasting or the study. | Intermittent fastingTrusted Source involves only eating during specific time intervals. There are many ways to do intermittent fasting; it can mean not eating on certain days or only eating food at certain times of the day. Some people seek to use IF to lose weight or maintain a healthy weight.
Some evidence suggests that intermittent fasting can help people lose body fat and may help reduce the risk of type 2 diabetes and cardiovascular disease.
However, researchers are still working to understand the potential dangers of intermittent fasting and how to weigh these risks against the potential benefits. Overall, this is an area where there is a need for more data.
Beata Rydyger, a registered nutritionist based in Los Angeles, CA, and clinical nutritional advisor to Zen Nutrients, who was not involved in the study, pointed out a challenge with studying dietary behaviors to Medical News Today:
“Generally speaking, diets are more difficult to study because dietary changes don’t have an immediate effect on health. Most study participants find it hard to track what they eat, and few can adhere to a diet for long enough for beneficial effects to be measured.”
Reducing calories for weight loss
This study included 547 participants recruited from three different health systems.
Researchers collected information on participants through electronic health records and the use of a specialized mobile app called Daily24. Participants could record when they ate, meal size, the times they went to sleep, and when they woke up.
For each meal recorded, participants estimated meal size as less than 500 calories (small), 500-1,000 calories (medium), or greater than 1,000 calories (large).
Study author Dr. Wendy Bennett elaborated on their research methods to MNT:
“We designed an app to collect ‘timing of eating,’ and when participants input the timing, we also asked them the size of the meal (small, med, or large). Participants from 3 health systems used the app for 6 months. We linked the app data with survey data with electronic health records.”
Dr. Bennett said that they then analyzed the link between eating intervals, including the participants’ total eating window, the time between their wake-up and bedtime, and the time between their last meal and bedtime, with changes in their weight over about six years.
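To make these measurements concrete, below is a minimal sketch of how the meal-size buckets and the daily intervals described above could be derived from logged times. The function names, thresholds, and example values are illustrative assumptions, not the study's actual Daily24 analysis pipeline.

```python
from datetime import datetime, timedelta

def meal_size(calories: float) -> str:
    # Buckets as reported by participants: <500 small, 500-1,000 medium, >1,000 large
    if calories < 500:
        return "small"
    if calories <= 1000:
        return "medium"
    return "large"

def daily_intervals(wake: datetime, bedtime: datetime, meal_times: list) -> dict:
    # The three intervals mentioned in the study write-up
    def hours(delta):
        return delta.total_seconds() / 3600
    first_meal, last_meal = min(meal_times), max(meal_times)
    return {
        "eating_window_h": hours(last_meal - first_meal),
        "wake_to_bedtime_h": hours(bedtime - wake),
        "last_meal_to_bedtime_h": hours(bedtime - last_meal),
    }

# Example day: wake 07:00, meals at 08:00, 13:00 and 19:30, bedtime 23:00
day = datetime(2024, 1, 1)
print(meal_size(650))  # "medium"
print(daily_intervals(wake=day + timedelta(hours=7),
                      bedtime=day + timedelta(hours=23),
                      meal_times=[day + timedelta(hours=h) for h in (8, 13, 19.5)]))
```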
The researchers found that the timing from the first meal of the day to the last meal of the day was not associated with changes in weight. However, they did find that eating more frequent, larger meals was associated with weight gain.
Data on intermittent fasting is still emerging, so no one study offers all the proof that the method is effective or ineffective. This particular study also had several limitations to consider.
First, researchers could only analyze data from study participants who downloaded and used the Daily24 app. This exclusion may have impacted the study population and results.
They only recruited participants from three health systems, meaning the results cannot necessarily be generalized. Almost 78% of participants were women and white, indicating the need for more diverse future studies.
The study also had a relatively short follow-up time, leading to fewer weight measurements and declines in measurement precision. Researchers were also unable to measure participants’ intentions to lose weight before their enrollment in the study.
The way researchers measured eating periods could not evaluate more complex fasting methods. Data also relied on participants’ self-reporting, and food was not standardized or assessed for quality.
“This study did not specifically assess patterns like intermittent fasting. We also did not assess diet quality for the meals reported in the app,” Dr. Bennett noted to MNT.
“Randomized controlled trials that adjust for caloric intake are needed to further test the role of timing of eating in weight gain prevention and also weight loss,” she added. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
I'm researching intermittent fasting as a weight loss program as a whole for my nutritional class. To help me prepare for the paper I want you summarize the study findings in the paragraph, but don't mention the limitations. Then provide a list of benefits for intermittent fasting and any potential negatives based on intermittent fasting or the study.
{passage 0}
==========
Intermittent fasting involves only eating during specific time intervals. There are many ways to do intermittent fasting; it can mean not eating on certain days or only eating food at certain times of the day. Some people seek to use IF to lose weight or maintain a healthy weight.
Some evidence suggests that intermittent fasting can help people lose body fat and may help reduce the risk of type 2 diabetes and cardiovascular disease.
However, researchers are still working to understand the potential dangers of intermittent fasting and how to weigh these risks against the potential benefits. Overall, this is an area where there is a need for more data.
Beata Rydyger, a registered nutritionist based in Los Angeles, CA, and clinical nutritional advisor to Zen Nutrients, who was not involved in the study, pointed out a challenge with studying dietary behaviors to Medical News Today:
“Generally speaking, diets are more difficult to study because dietary changes don’t have an immediate effect on health. Most study participants find it hard to track what they eat, and few can adhere to a diet for long enough for beneficial effects to be measured.”
Reducing calories for weight loss
This study included 547 participants recruited from three different health systems.
Researchers collected information on participants through electronic health records and the use of a specialized mobile app called Daily24. Participants could record when they ate, meal size, the times they went to sleep, and when they woke up.
For each meal recorded, participants estimated meal size as less than 500 calories (small), 500-1,000 calories (medium), or greater than 1,000 calories (large).
Study author Dr. Wendy Bennett elaborated on their research methods to MNT:
“We designed an app to collect ‘timing of eating,’ and when participants input the timing, we also asked them the size of the meal (small, med, or large). Participants from 3 health systems used the app for 6 months. We linked the app data with survey data with electronic health records.”
Dr. Bennett said that they then analyzed the link between eating intervals, including the participants’ total eating window, the time between their wake-up and bedtime, and the time between their last meal and bedtime, with changes in their weight over about six years.
The researchers found that the timing from the first meal of the day to the last meal of the day was not associated with changes in weight. However, they did find that eating more frequent, larger meals was associated with weight gain.
Data on intermittent fasting is still emerging, so no one study offers all the proof that the method is effective or ineffective. This particular study also had several limitations to consider.
First, researchers could only analyze data from study participants who downloaded and used the Daily24 app. This exclusion may have impacted the study population and results.
They only recruited participants from three health systems, meaning the results cannot necessarily be generalized. Almost 78% of participants were women and white, indicating the need for more diverse future studies.
The study also had a relatively short follow-up time, leading to fewer weight measurements and declines in measurement precision. Researchers were also unable to measure participants’ intentions to lose weight before their enrollment in the study.
The way researchers measured eating periods could not evaluate more complex fasting methods. Data also relied on participants’ self-reporting, and food was not standardized or assessed for quality.
“This study did not specifically assess patterns like intermittent fasting. We also did not assess diet quality for the meals reported in the app,” Dr. Bennett noted to MNT.
“Randomized controlled trials that adjust for caloric intake are needed to further test the role of timing of eating in weight gain prevention and also weight loss,” she added.
https://www.medicalnewstoday.com/articles/weight-loss-study-finds-calorie-restriction-more-effective-than-intermittent-fasting#Intermittent-fasting:-risks-and-benefits |
Only refer to the attached document in providing your response. | How can credit unions attract younger millennial clients? | FORBES > MONEY
3 Ways Credit Unions Can Attract More Millennial Members
Crissi Cole | Forbes Councils Member
Forbes Finance Council
COUNCIL POST | Membership (Fee-Based)
Feb 28, 2024, 07:00am EST
Crissi Cole is the CEO and founder of Penny Finance,
an online financial mentorship community for women.
Where did you open your first bank account? For me, it
was with a credit union. I still remember how proud I
felt, strolling through the glass doors of Washington
Trust in Rhode Island to deposit hard-earned tips from
a summer job. That first check turned into a lifelong
relationship, and years later, I took out my first
mortgage with that same credit union.
Yet, for others around my age, this isn’t always the case.
Wooed by compelling adverts and household names, I
noticed that many of my friends housed their assets at
traditional financial institutions. In fact, only 14% of
Americans ages 25-34 are members of credit unions.
This is surprising when you consider credit unions’
advantages. For example, did you know that credit
unions are actually member-owned nonprofits?
According to Bankrate, this structure enables credit
unions to charge lower interest rates on loans and
higher yields on savings products. That’s why you may
have noticed competitive rates on a mortgage at your
local credit union, or better yields on share certificates
or savings accounts.
Between 2022 and 2045, baby boomers are projected to
hand down $72.6 trillion in assets to their heirs,
including Generation X and millennials. So, it’s more
important than ever for credit unions to appeal to
millennials.
How can credit unions attract and retain more
millennial members during the great wealth transfer? As
a fintech founder, a millennial and a credit union
member, here are three practical solutions that credit
unions can implement today.
Offer a competitive high-yield
savings account.
What’s a popular piece of financial advice I see
millennials passing on to their friends right now? Open
a high-yield savings account, like, yesterday. One of the
only upsides to high interest rates is the HYSA, and
millennials are taking note. As a credit union, one way
to attract millennial members—and to keep existing
members from moving their money out of your
ecosystem—is to offer a high-yield savings account with
competitive rates.
If your credit union can’t offer a HYSA right now, there
are strategic alternatives. For example, in 2022,
Farmers Insurance Group Federal Credit Union raised
the rate of its online savings account, eliminating its
tiers and minimum balance requirement, while keeping
the rate on its money market accounts static.
Tackle issues millennials care about.
Millennials hold 47% of student loan debt in the U.S.
Speaking from personal experience, this debt weighs
down heavily on us, limiting our upward mobility and
delaying experiences considered “rites of passage” for
previous generations, such as home ownership. Credit
unions can offer support to millennial members through
loan refinancing for better rates, but there’s the
opportunity to go further.
Giving members the tools to figure out their debt payoff
plan—in the form of online learning, planning and
calculators—can provide them the support they need to
get out of debt and, one day, into investing.
Provide next-gen financial planning.
Only one-sixth of credit unions in the U.S. offer financial
planning services, yet 85% of millennials and Gen Z seek
some form of behavioral coaching about their finances.
For credit unions, it’s challenging to scale an army of
advisors to serve this need. Plus, millennials often don’t
meet the asset minimum needed to hire an advisor.
That’s why it’s time to meet millennials where they are:
online.
Offering digital, nontraditional financial planning to
your millennial members is a way to be at the forefront
of money management. And if your credit union does
happen to offer wealth management, offering digital
planning solutions isn’t a conflict of interest; it’s an on-
ramp. Helping your members become financially
healthy today means they’re more likely to have
investable assets tomorrow, and if you have the data
infrastructure in place, you’ll be able to reroute them in
your ecosystem.
The information provided here is not investment, tax or
financial advice. You should consult with a licensed
professional for advice concerning your specific
situation.
Forbes Finance Council is an invitation-only
organization for executives in successful accounting,
financial planning and wealth management firms. Do I
qualify?
Crissi Cole
Crissi Cole is the CEO and founder of Penny Finance, an online financial mentorship community for women.
Editorial Standards Reprints & Permissions | Only refer to the attached document in providing your response.
How can credit unions attract younger millennial clients?
FORBES > MONEY
3 Ways Credit Unions Can Attract More Millennial Members
Crissi Cole | Forbes Councils Member
Forbes Finance Council
COUNCIL POST | Membership (Fee-Based)
Feb 28, 2024, 07:00am EST
Crissi Cole is the CEO and founder of Penny Finance,
an online financial mentorship community for women.
Where did you open your first bank account? For me, it
was with a credit union. I still remember how proud I
felt, strolling through the glass doors of Washington
Trust in Rhode Island to deposit hard-earned tips from
a summer job. That first check turned into a lifelong
relationship, and years later, I took out my first
mortgage with that same credit union.
Yet, for others around my age, this isn’t always the case.
Wooed by compelling adverts and household names, I
noticed that many of my friends housed their assets at
traditional financial institutions. In fact, only 14% of
Americans ages 25-34 are members of credit unions.
This is surprising when you consider credit unions’
advantages. For example, did you know that credit
unions are actually member-owned nonprofits?
According to Bankrate, this structure enables credit
unions to charge lower interest rates on loans and
higher yields on savings products. That’s why you may
have noticed competitive rates on a mortgage at your
local credit union, or better yields on share certificates
or savings accounts.
Between 2022 and 2045, baby boomers are projected to
hand down $72.6 trillion in assets to their heirs,
including Generation X and millennials. So, it’s more
important than ever for credit unions to appeal to
millennials.
How can credit unions attract and retain more
millennial members during the great wealth transfer? As
a fintech founder, a millennial and a credit union
member, here are three practical solutions that credit
unions can implement today.
Offer a competitive high-yield
savings account.
What’s a popular piece of financial advice I see
millennials passing on to their friends right now? Open
a high-yield savings account, like, yesterday. One of the
only upsides to high interest rates is the HYSA, and
millennials are taking note. As a credit union, one way
to attract millennial members—and to keep existing
members from moving their money out of your
ecosystem—is to offer a high-yield savings account with
competitive rates.
If your credit union can’t offer a HYSA right now, there
are strategic alternatives. For example, in 2022,
Farmers Insurance Group Federal Credit Union raised
the rate of its online savings account, eliminating its
tiers and minimum balance requirement, while keeping
the rate on its money market accounts static.
Tackle issues millennials care about.
Millennials hold 47% of student loan debt in the U.S.
Speaking from personal experience, this debt weighs
down heavily on us, limiting our upward mobility and
delaying experiences considered “rites of passage” for
previous generations, such as home ownership. Credit
unions can offer support to millennial members through
loan refinancing for better rates, but there’s the
opportunity to go further.
Giving members the tools to figure out their debt payoff
plan—in the form of online learning, planning and
calculators—can provide them the support they need to
get out of debt and, one day, into investing.
Provide next-gen financial planning.
Only one-sixth of credit unions in the U.S. offer financial
planning services, yet 85% of millennials and Gen Z seek
some form of behavioral coaching about their finances.
For credit unions, it’s challenging to scale an army of
advisors to serve this need. Plus, millennials often don’t
meet the asset minimum needed to hire an advisor.
That’s why it’s time to meet millennials where they are:
online.
Offering digital, nontraditional financial planning to
your millennial members is a way to be at the forefront
of money management. And if your credit union does
happen to offer wealth management, offering digital
planning solutions isn’t a conflict of interest; it’s an on-
ramp. Helping your members become financially
healthy today means they’re more likely to have
investable assets tomorrow, and if you have the data
infrastructure in place, you’ll be able to reroute them in
your ecosystem.
The information provided here is not investment, tax or
financial advice. You should consult with a licensed
professional for advice concerning your specific
situation.
Forbes Finance Council is an invitation-only
organization for executives in successful accounting,
financial planning and wealth management firms. Do I
qualify?
Follow me on Twitter or LinkedIn. Check
out my website.
Crissi Cole
Crissi Cole is the CEO and founder of Penny Finance, an online financial mentorship community for women.
Editorial Standards Reprints & Permissions |
For this task, you should answer questions only based on the information provided in the prompt. You are not allowed to use any internal information, prior knowledge, or external resources to answer questions. Do not exceed 250 words, and provide the answer in paragraph form. | What are the ideal features that could be added to an institutional data repository that would make them more appealing/helpful to researchers? | Scientists’ data practices
Participants across all the focus groups indicated having a DMP for at least one of their recent or current projects. Regarding data storage, some participants across four focus groups (atmosphere and earth science, chemistry, computer science, and neuroscience) used institutional repositories (IRs) for their data at some point within the data lifecycle, with five participants explicitly indicating use of IRs in their DMPs. The other popular choice discussed across four focus groups (atmospheric and earth science, computer science, ecology, and neuroscience) was proprietary cloud storage systems (e.g., DropBox, GitHub, and Google Drive). These users were concerned about file size limitations, costs, long-term preservation, data mining by the service providers, and the number of storage solutions becoming burdensome.
Desired repository features
Data traceability
Participants across four focus groups (atmosphere and earth science, chemistry, ecology, and neuroscience) mentioned wanting different kinds of information about how their data were being used to be tracked after data deposit in repositories. They wanted to know how many researchers view, cite, and publish based on the data they deposit. Additionally, participants wanted repositories to track any changes to their data post-deposit. For example, they suggested the creation of a path for updates to items in repositories after initial submission. They also wanted repositories to allow explicit versioning of their materials to clearly inform users of changes to materials over time. Relatedly, participants wanted repositories to provide notification systems for data depositors and users to know when new versions or derivative works based on their data become available as well as notifications for depositors about when their data has been viewed, cited, or included in a publication.
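As a rough illustration of the traceability features described here (usage counts, explicit versioning, and depositor notifications), the sketch below models a repository record in Python. The class and field names are hypothetical and do not correspond to any particular repository's data model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List

@dataclass
class DatasetVersion:
    number: int
    change_note: str
    deposited_at: datetime

@dataclass
class DatasetRecord:
    # Illustrative repository record with the traceability features participants asked for
    title: str
    views: int = 0
    citations: int = 0
    versions: List[DatasetVersion] = field(default_factory=list)
    subscribers: List[Callable[[str], None]] = field(default_factory=list)  # notification hooks

    def add_version(self, change_note: str) -> None:
        self.versions.append(DatasetVersion(len(self.versions) + 1, change_note,
                                            datetime.now(timezone.utc)))
        self._notify(f"New version {len(self.versions)} of '{self.title}': {change_note}")

    def record_citation(self) -> None:
        self.citations += 1
        self._notify(f"'{self.title}' was cited; total citations: {self.citations}")

    def _notify(self, message: str) -> None:
        for hook in self.subscribers:
            hook(message)

record = DatasetRecord(title="Lake temperature series")
record.subscribers.append(print)   # the depositor "subscribes" to updates
record.add_version("Initial deposit")
record.record_citation()
```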
Metadata
Participants across three focus groups (atmospheric and earth science, chemistry, and neuroscience) discussed wanting high quality metadata within repositories. Some argued for automated metadata creation when uploading their data into repositories to save time and provide at least some level of description of their data (e.g., P1, P4, Chemistry). Within their own projects and in utilizing repositories, participants wanted help with metadata quality control issues. Participants within atmospheric and earth science who frequently created or interacted with complex files wanted expanded types of metadata (e.g., greater spatial metadata for geographic information system (GIS) data). Atmospheric and earth scientists, chemists, and neuroscientists wanted greater searchability and machine readability of data and entities within datasets housed in repositories, specifically to find a variable by multiple search parameters.
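A minimal example of the kind of machine-readable, multi-parameter-searchable metadata participants asked for might look like the following. Every field name and value here is an invented placeholder rather than a real metadata standard.

```python
# A hypothetical machine-readable metadata record; field names are illustrative only.
metadata = {
    "title": "Hourly air temperature, 2015-2020",
    "creators": ["A. Researcher"],
    "variables": [
        {"name": "air_temp", "unit": "degC", "method": "automated station"},
    ],
    "spatial_coverage": {"bbox": [-93.5, 41.9, -93.3, 42.1], "crs": "EPSG:4326"},
    "temporal_coverage": {"start": "2015-01-01", "end": "2020-12-31"},
    "keywords": ["meteorology", "air temperature"],
}

def matches(record: dict, variable: str = None, keyword: str = None) -> bool:
    # Toy multi-parameter search of the kind participants wanted
    if variable is not None and not any(v["name"] == variable for v in record["variables"]):
        return False
    if keyword is not None and keyword not in record["keywords"]:
        return False
    return True

print(matches(metadata, variable="air_temp", keyword="meteorology"))  # True
```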
Data use restrictions
Participants across all five focus groups agreed that repositories need to clearly explain what a researcher can and cannot do with a dataset. For example, participants thought repositories should clearly state on every dataset whether researchers can: base new research on the data, publish based on the data, and use the data for business purposes. Participants stated current data restrictions can be confusing to those not acquainted with legal principles. For example, one data professional (P2, Chemistry) explained that researchers often mislabeled their datasets with ill-suited licenses. Participants commonly reported using Open Access or Creative Commons, but articulated the necessity of having the option for restrictive or proprietary licenses, although most had not used such licenses.
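To show how a repository might state these permissions unambiguously on each dataset, here is a small hypothetical use-terms structure; the licence name and flags are illustrative only.

```python
# Hypothetical per-dataset use statement; licence name and flags are illustrative.
use_terms = {
    "licence": "CC-BY-NC-4.0",
    "can_base_new_research": True,
    "can_publish_results": True,
    "commercial_use_allowed": False,
    "embargo_until": "2026-01-01",   # None once the embargo lapses
}

def allowed(terms: dict, action: str) -> bool:
    # Answer "can I do X with this dataset?" from the stated terms
    return bool(terms.get(action, False))

print(allowed(use_terms, "can_publish_results"))    # True
print(allowed(use_terms, "commercial_use_allowed")) # False
```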
Some participants used embargoes and others never had. Most viewed embargoes as “a necessary evil,” provided that they are limited to approximately a few years after repository submission or until time of publication. Participants did not think it was fair to repository staff or potential data reusers to have any data embargoed in perpetuity.
Stable infrastructure
Participants across two focus groups (atmospheric and earth science, and chemistry) expressed concern about the long-term stability of their data in repositories. Some stated that their fear of a repository not being able to provide long-term preservation of their data led them to seek out and utilize alternative storage solutions. Others expected repositories to commit to the future of their data and have satisfactory funding structures to fulfill their stated missions. Participants described stable repository infrastructure in terms of updating data files (i.e., versioning) and formats over time and ensuring their usability.
Security
Participants across four focus groups (atmospheric and earth science, chemistry, computer science, and neuroscience) discussed wanting their data to be secure. They feared lax security could compromise their data. Specific to embargoed data, they feared lax security could enable “scooping” of research before data depositors are able to make use of the data through publication. Those handling data with confidential, sensitive or personally identifiable information expressed the most concern about potential security breaches because it could result in a breach and loss of trust with their current and future study participants, making it harder for themselves and future researchers to recruit study participants in the long-term, and it would result in noncompliance with mandates from their IRBs. | System Instruction: [For this task, you should answer questions only based on the information provided in the prompt. You are not allowed to use any internal information, prior knowledge, or external resources to answer questions. Do not exceed 250 words, and provide the answer in paragraph form.]
Question: [What are the ideal features that could be added to an institutional data repository that would make them more appealing/helpful to researchers?]
Context Block: [Scientists’ data practices
Participants across all the focus groups indicated having a DMP for at least one of their recent or current projects. Regarding data storage, some participants across four focus groups (atmosphere and earth science, chemistry, computer science, and neuroscience) used institutional repositories (IRs) for their data at some point within the data lifecycle, with five participants explicitly indicating use of IRs in their DMPs. The other popular choice discussed across four focus groups (atmospheric and earth science, computer science, ecology, and neuroscience) was proprietary cloud storage systems (e.g., DropBox, GitHub, and Google Drive). These users were concerned about file size limitations, costs, long-term preservation, data mining by the service providers, and the number of storage solutions becoming burdensome.
Desired repository features
Data traceability
Participants across four focus groups (atmosphere and earth science, chemistry, ecology, and neuroscience) mentioned wanting different kinds of information about how their data were being used to be tracked after data deposit in repositories. They wanted to know how many researchers view, cite, and publish based on the data they deposit. Additionally, participants wanted repositories to track any changes to their data post-deposit. For example, they suggested the creation of a path for updates to items in repositories after initial submission. They also wanted repositories to allow explicit versioning of their materials to clearly inform users of changes to materials over time. Relatedly, participants wanted repositories to provide notification systems for data depositors and users to know when new versions or derivative works based on their data become available as well as notifications for depositors about when their data has been viewed, cited, or included in a publication.
Metadata
Participants across three focus groups (atmospheric and earth science, chemistry, and neuroscience) discussed wanting high quality metadata within repositories. Some argued for automated metadata creation when uploading their data into repositories to save time and provide at least some level of description of their data (e.g., P1, P4, Chemistry). Within their own projects and in utilizing repositories, participants wanted help with metadata quality control issues. Participants within atmospheric and earth science who frequently created or interacted with complex files wanted expanded types of metadata (e.g., greater spatial metadata for geographic information system (GIS) data). Atmospheric and earth scientists, chemists, and neuroscientists wanted greater searchability and machine readability of data and entities within datasets housed in repositories, specifically to find a variable by multiple search parameters.
Data use restrictions
Participants across all five focus groups agreed that repositories need to clearly explain what a researcher can and cannot do with a dataset. For example, participants thought repositories should clearly state on every dataset whether researchers can: base new research on the data, publish based on the data, and use the data for business purposes. Participants stated current data restrictions can be confusing to those not acquainted with legal principles. For example, one data professional (P2, Chemistry) explained that researchers often mislabeled their datasets with ill-suited licenses. Participants commonly reported using Open Access or Creative Commons, but articulated the necessity of having the option for restrictive or proprietary licenses, although most had not used such licenses.
Some participants used embargoes and others never had. Most viewed embargoes as “a necessary evil,” provided that they are limited to approximately a few years after repository submission or until time of publication. Participants did not think it was fair to repository staff or potential data reusers to have any data embargoed in perpetuity.
Stable infrastructure
Participants across two focus groups (atmospheric and earth science, and chemistry) expressed concern about the long-term stability of their data in repositories. Some stated that their fear of a repository not being able to provide long-term preservation of their data led them to seek out and utilize alternative storage solutions. Others expected repositories to commit to the future of their data and have satisfactory funding structures to fulfill their stated missions. Participants described stable repository infrastructure in terms of updating data files (i.e., versioning) and formats over time and ensuring their usability.
Security
Participants across four focus groups (atmospheric and earth science, chemistry, computer science, and neuroscience) discussed wanting their data to be secure. They feared lax security could compromise their data. Specific to embargoed data, they feared lax security could enable “scooping” of research before data depositors are able to make use of the data through publication. Those handling data with confidential, sensitive or personally identifiable information expressed the most concern about potential security breaches because it could result in a breach and loss of trust with their current and future study participants, making it harder for themselves and future researchers to recruit study participants in the long-term, and it would result in noncompliance with mandates from their IRBs.] |
Limit your response to a maximum of 100 words. You may only respond to the prompt using information provided in the context block. If possible, use figures or percentages in your arguments. Don't use the word "medicine". | Why is access to information important when dealing with health products? | 15. The need for good governance is increasingly recognized as a major hurdle on the road to
achieving universal health coverage. Weak governance complicates access to health products by fuelling inefficiencies, distorting competition and leaving the system vulnerable to undue influence, corruption, waste, fraud and abuse. Given the large role of health products in the provision of health care and the proportion of health spending they represent (as high as 60% for medicines in some countries),2 improving governance will help prevent the waste of public resources needed to sustain health systems and provide quality and affordable care.
16. There is a pressing need to improve access to timely, robust and relevant information concerning health products. Unbiased information that is free of any conflict of interest is vital for the sound selection, incorporation, prescription and use of health products. Transparency of this information is central to accountability, strengthens confidence in public institutions and improves the efficiency of the system. Activities in the road map address the transparency of clinical trials enabling support for clinical trial registries and address price transparency through the Market Information for Access to Vaccines (MI4A platform),3 for example.
17. The relationship between government and the private sector, such as pharmaceutical companies and medical device companies, requires particular attention. A question of growing importance is how to support governments to work effectively with the private sector and develop public policy while avoiding the risks of undue influence and maximizing benefits. WHO supports improving practices in both the public and private sectors to ensure that national policies reflect the central role of access to
health products in achieving universal health coverage and in contributing to improved accountability. | system instruction: Limit your response to a maximum of 100 words. You may only respond to the prompt using information provided in the context block. If possible, use figures or percentages in your arguments. Don't use the word "medicine".
question: Why is access to information important when dealing with health products?
context block: [15. The need for good governance is increasingly recognized as a major hurdle on the road to
achieving universal health coverage. Weak governance complicates access to health products by fuelling
inefficiencies, distorting competition and leaving the system vulnerable to undue influence, corruption,
waste, fraud and abuse. Given the large role of health products in the provision of health care and the
proportion of health spending they represent (as high as 60% for medicines in some countries),2
improving governance will help prevent the waste of public resources needed to sustain health systems
and provide quality and affordable care.
16. There is a pressing need to improve access to timely, robust and relevant information concerning
health products. Unbiased information that is free of any conflict of interest is vital for the sound
selection, incorporation, prescription and use of health products. Transparency of this information is
central to accountability, strengthens confidence in public institutions and improves the efficiency of
the system. Activities in the road map address the transparency of clinical trials enabling support for
clinical trial registries and address price transparency through the Market Information for Access to
Vaccines (MI4A platform),3
for example.
17. The relationship between government and the private sector, such as pharmaceutical companies
and medical device companies, requires particular attention. A question of growing importance is how
to support governments to work effectively with the private sector and develop public policy while
avoiding the risks of undue influence and maximizing benefits. WHO supports improving practices in
both the public and private sectors to ensure that national policies reflect the central role of access to
health products in achieving universal health coverage and in contributing to improved accountability.] |
Respond to this prompt using only the information contained in the context as you are not an expert in this subject matter. | What does the context suggest are potential promising areas of research going forward? | Another approach commonly brought up by patients on LT4 with persistent complaints is the use of a combination therapy including LT4 and T3. This regimen was addressed by 14 randomized trials of the combination therapy that did not demonstrate benefit,37,44 and 5 other studies67–71 that reported some benefit.40 However, the study protocols differed in terms of design, including variable use of crossover or parallel groups, blinding, the ratio of T4 to T3 dosage, treatment duration as well as definitions of primary and secondary outcomes. In addition, some studies were subject to carryover effects, overtreatment, and limited inclusion of men and older age groups, underpowered sample size, short duration
and once daily T3 dosing. Consistently, 5 meta-analyses or reviews also suggested no clear advantage of the combination therapy.37,72–75 Importantly, potential long-term risks of T3 addition, such as cardiac arrhythmias, or decreased bone mineral density were not fully investigated. Therefore, Guidelines of the American Thyroid Association concluded that there is insufficient evidence to recommend the combination therapy. However, if such a therapy is chosen, it should resemble physiology, that is, the physiological molar T4 to T3 ratio of 14:1 to 15:1,37 and synthetic T4 to T3 conversion factor 3:1.76 Sustained release T3 formulations under development may help achieving physiological goals.
Interestingly, a benefit of a therapy containing T3 was shown in a subgroup analysis of patients who remained the most symptomatic while taking LT4. Therefore, this might be the group of patients that may need to be targeted in future, well designed and appropriately powered studies on the combination therapies.77 The subset of patients potentially benefiting from the combination therapy is likely to have a pathophysiological explanation, as it was shown that lower T3 levels during monotherapy with LT4 were associated with the presence of Thr92Ala polymorphism of deiodinase type 2 (DIO2) gene.78 Genotyping for the presence of Thr92Ala polymorphism in patients treated for hypothyroidism revealed that Ala/Ala homozygotes had worse quality of life scores while taking LT4.79 In addition, another small study showed that patients with both Thr92Ala polymorphism and a polymorphism in one of the thyroid hormone transporters (MTC10) preferred the combination therapy with both LT4 and T3.80 However, other studies did not confirm these findings.81–83 Hence, only the results from a new, prospective, well-designed, adequately powered study of the effects of DIO2 and MTC10 polymorphisms on response to therapy can assess if this genetic background could be a marker guiding either a monotherapy or the combination therapy in overtly hypothyroid patients.
The role of surgery for HT has been traditionally limited to the patients presenting with either pain or compressive symptoms due to goiter or co-existing malignant thyroid nodules.84 However, it was recently hypothesized that thyroidectomy might be a therapeutic modality used to reduce TPOAbs titers, as the presence of such antibodies is associated with lower quality of life even in euthyroid individuals. Consequently, a clinical trial addressed this concept, randomizing highly positive TPOAb patients with continued symptoms while receiving LT4 to either thyroidectomy or continued medical management. In those who underwent thyroidectomy, TPOAbs significantly declined, quality of life and fatigue improved, and the effect was sustained at 12 to 18 month landmarks.85
Hashimoto thyroiditis and thyroid nodules. Based on evaluation of pathological specimens, the average prevalence of papillary thyroid cancer in patients with HT was around 27%, with an associated increased risk ratio of 1.59, as compared with the general population.86,87 A recent meta-analysis that combined the studies analyzing cytological and pathological specimens derived from patients with HT concluded that this association is based on low-to-moderate quality evidence.88 Apart from papillary thyroid cancer, a non-Hodgkin primary thyroid lymphoma was strongly associated with HT, with a risk of about 60 times higher than in the general population.32 Thyroid lymphoma accounts for approximately 5% of all thyroid neoplasms. Diagnosis of thyroid lymphoma is important to be established, as it changes the first line therapy from surgery, that is routinely implemented for malignant thyroid nodules, to appropriately targeted chemotherapy for lymphoproliferative disorders. Therapy of thyroid lymphoma and malignant thyroid nodules is beyond the scope of this review, but can be found in the respective guidelines.89
Hashimoto thyroiditis and pregnancy
The prevalence of TPOAbs in pregnant women is estimated to be 5%–14% and TgAbs are seen in 3%–18% of pregnant female individuals.90 The presence of these Abs, indicating thyroid autoimmunity, is associated with a 2 to 4-fold increase in the risk of recurrent miscarriages91,92 and a 2 to 3-fold increased risk of preterm birth.91,93,94 The mechanisms behind these adverse pregnancy outcomes in TPOAb positive euthyroid women are unclear but some authors postulate that TPOAbs might be markers for other forms of autoimmunity that target the placental-fetal unit.95 However, thyroid autoimmunity seems to have an additive or synergistic effect on miscarriage93 and prematurity96 risk in women with maternal subclinical hypothyroidism. A recent meta-analysis including 19 cohort studies enrolling 47 045 pregnant women showed almost 3-fold increased risk of preterm birth in women with subclinical hypothyroidism and 1.5-fold increased risk of preterm birth in women with isolated hypothyroxinemia.94 Another meta-analysis of 26 studies found significant associations between maternal subclinical hypothyroidism or hypothyroxinemia and lower child IQ, language delay or global developmental delay as compared with children of euthyroid women.97
Overt hypothyroidism was associated with increased rates of gestational hypertension including preeclampsia and eclampsia, gestational diabetes, placental abruption, postpartum hemorrhage, preterm delivery, low birthweight, infant intensive care unit admissions, fetal death, and neurodevelopmental delays in the offspring.98,99,100 Therefore, overt hypothyroidism should be treated to prevent adverse effects on pregnancy and child developmental outcomes, and treatment should be started before conception to achieve biochemical euthyroidism.26 Therapy with LT4 improved the success rate of in vitro fertilization in TPOAbs positive women with TSH above 2.5 mIU/ml.26 Importantly, women treated for hypothyroidism typically require a 20% to 30% increase in their LT4 dose, which usually translates into the addition of 2 pills per week (two extra daily doses out of seven, roughly a 29% increase) early in the first trimester.26 The physiological explanation for increased thyroid hormone requirements is based upon several factors including increased hepatic thyroxine binding globulin synthesis and enhanced metabolism of thyroid hormone through its inactivation by the placental type 3 DIO.26,101 The use of T3 or T4+T3 combination therapy is not indicated in pregnancy, as liothyronine does not cross the blood-brain barrier to the fetal brain.102 LT4 replacement therapy should be monitored monthly, as over- and undertreatment lead to adverse pregnancy outcomes.26 The suggested target TSH is within the lower half of the trimester-specific reference range or below 2.5 mIU/ml, if the trimester-specific ranges are not available.26
Regarding maternal subclinical hypothyroidism, the 2017 American Thyroid Association guidelines recommend utilizing TPOAb status along with serum levels of TSH to guide treatment decisions (TABLE 2).26 LT4 therapy is not recommended for isolated hypothyroxinemia.26 A 2021 systematic review and meta-analysis of 6 randomized controlled trials assessing the effect of LT4 treatment in euthyroid women with thyroid autoimmunity did not find any significant differences in the relative risk of miscarriage and preterm delivery, or outcomes with live birth. Therefore, no strong recommendations regarding the therapy in such scenarios could be made, but consideration on a case-by-case basis might be implemented (TABLE 2).103
Areas of research
There are promising new models being developed to study the pathophysiology of
thyroid disease, as functional thyroid follicles from embryonic or pluripotent stem
cells were established in animal models.104,105 This potentially allows for studying
mechanisms of autoimmunity that could guide prevention of the disease progression to overt hypothyroidism in predisposed individuals. Stem cells could be also used in regenerative medicine to replace those destroyed by the autoimmune processes in the thyroid gland. A better understanding of the response to therapy with thyroid hormones might be achieved from studies focusing on transcriptome profiling of expression of genes responsive to thyroid hormone action. This could help titrating thyroid hormone replacement therapy. New preparations of sustained release T3 have successfully passed phase 1 clinical trials and may add to our armamentarium for HT therapy once necessary efficacy trials are completed. | Respond to this prompt using only the information contained in the context as you are not an expert in this subject matter.
What does the context suggest are potential promising areas of research going forward?
Another approach commonly brought up by patients on LT4 with persistent complaints is the use of a combination therapy including LT4 and T3. This regimen was addressed by 14 randomized trials of the combination therapy that did not demonstrate benefit,37,44 and 5 other studies67–71 that reported some benefit.40 However, the study protocols differed in terms of design, including variable use of crossover or parallel groups, blinding, the ratio of T4 to T3 dosage, treatment duration as well as definitions of primary and secondary outcomes. In addition, some studies were subject to carryover effects, overtreatment, and limited inclusion of men and older age groups, underpowered sample size, short duration
and once daily T3 dosing. Consistently, 5 meta-analyses or reviews also suggested no clear advantage of the combination therapy.37,72–75 Importantly, potential long-term risks of T3 addition, such as cardiac arrhythmias, or decreased bone mineral density were not fully investigated. Therefore, Guidelines of the American Thyroid Association concluded that there is insufficient evidence to recommend the combination therapy. However, if such a therapy is chosen, it should resemble physiology, that is, the physiological molar T4 to T3 ratio of 14:1 to 15:1,37 and synthetic T4 to T3 conversion factor 3:1.76 Sustained release T3 formulations under development may help achieving physiological goals.
Interestingly, a benefit of a therapy containing T3 was shown in a subgroup analysis of patients who remained the most symptomatic while taking LT4. Therefore, this might be the group of patients that may need to be targeted in future, well designed and appropriately powered studies on the combination therapies.77 The subset of patients potentially benefiting from the combination therapy is likely to have a pathophysiological explanation, as it was shown that lower T3 levels during monotherapy with LT4 were associated with the presence of Thr92Ala polymorphism of deiodinase type 2 (DIO2 ) gene.78 Genotyping for the presence of Thr92Ala polymorphism in patients treated for hypothyroidism revealed
that Ala/Ala homozygotes had worse quality of life scores while taking LT4.79 In
addition, another small study showed that patients with both Thr92Ala polymorphism and a polymorphism in one of the thyroid hormone transporters (MTC10 ) preferred the combination therapy with both LT4 and T3.80 However, other studies did not confirm these findings.81–83 Hence, only the results from a new, prospective, well-designed, adequately powered study of the effects of DIO2 and MTC10 polymorphisms on response to therapy
can assess if this genetic background could be a marker guiding either a monotherapy or the combination therapy in overtly hypothyroid patients.
The role of surgery for HT has been traditionally limited to the patients presenting
with either pain or compressive symptoms due to goiter or co-existing malignant thyroid nodules.84 However, it was recently hypothesized that thyroidectomy might be a therapeutic modality used to reduce TPOAbs titers, as the presence of such antibodies is associated with lower quality of life even in euthyroid individuals. Consequently, a clinical trial addressed this concept, randomizing highly positive TPOAb patients with continued symptoms while receiving LT4 to either thyroidectomy or continued medical management. In those
who underwent thyroidectomy, TPOAbs significantly declined, quality of life and fatigue improved, and the effect was sustained at 12 to 18 month landmarks.85
Hashimoto thyroiditis and thyroid nodules. Based on evaluation of pathological specimens, the average prevalence of papillary thyroid cancer in patients with HT was around 27%, with an associated increased risk ratio of 1.59, as compared with the general population.86, 87 A recent meta-analysis that combined the studies analyzing cytological and pathological specimens derived from patients with
HT concluded that this association is based on low-to-moderate quality evidence.88 Apart from papillary thyroid cancer, a non-Hodgkin primary thyroid lymphoma was strongly associated with HT, with a risk of about 60 times higher than in the general population.32 Thyroid lymphoma accounts for approximately 5% of all thyroid neoplasms. Diagnosis of thyroid lymphoma is important to be established, as it changes the first line therapy from surgery, that is routinely implemented for malignant thyroid nodules, to appropriately targeted chemotherapy for lymphoproliferative disorders. Therapy of thyroid lymphoma and
malignant thyroid nodules is beyond the scope of this review, but can be found in the
respective guidelines.89
Hashimoto thyroiditis and pregnancy
The prevalence of TPOAbs in pregnant women is estimated to be 5%–14% and TgAbs are seen in 3%–18% of pregnant female individuals.90 The presence of these Abs indicating thyroid autoimmunity, is associated with a 2 to 4-fold increase in the risk of recurrent miscarriages91,92 and 2 to 3- fold increased risk of preterm birth.91,93,94 The mechanisms behind these adverse pregnancy outcomes in TPOAb positive euthyroid women are unclear but some authors postulate that TPOAbs might be markers for other forms of autoimmunity that target the placental-fetal unit.95 However, thyroid autoimmunity seems to have an
additive or synergistic effect on miscarriage 93 and prematurity 96 risk in women with maternal subclinical hypothyroidism. A recent meta-analysis including 19 cohort studies enrolling 47 045 pregnant women showed almost 3-fold increased risk of preterm birth in women with subclinical hypothyroidism and 1.5-fold increased risk of preterm birth in women with isolated hypothyroxinemia.94 Another meta-analysis of 26 studies found significant associations between maternal subclinical hypothyroidism or hypothyroxinemia and lower child IQ, language delay or global developmental delay as compared with children of euthyroid women.97
Overt hypothyroidism was associated with increased rates of gestational hypertension including preeclampsia and eclampsia, gestational diabetes, placental abruption, postpartum hemorrhage, preterm delivery, low birthweight, infant intensive care unit admissions, fetal death, and neurodevelopmental delays in the offspring.98,99,100 Therefore, overt hypothyroidism should be treated to prevent adverse effects on pregnancy and child developmental outcomes and should be started before conception to achieve biochemical euthyroidism.26 Therapy with LT4 improved success rate of in vitro fertilization in TPOAbs positive women with TSH above 2.5 mIU/ml.26 Importantly, women treated for hypothyroidism typically require a 20% to 30% increase in their LT4 dose, which usually
translates into addition of 2 pills per week early in the first trimester.26 The physiological explanation for increased thyroid hormone requirements is based upon several factors including increased hepatic thyroxine binding globulin synthesis and enhanced metabolism of thyroid hormone through its inactivation by the placental type 3 DIO.26,101 The use of T3 or T4+T3 combination therapy is not indicated in pregnancy, as liothyronine does not cross the blood-brain barrier to the fetal brain.102 LT4 replacement therapy should be monitored monthly, as over- and undertreatment lead to adverse pregnancy outcomes.26 The suggested
target TSH is within the lower half of the trimester-specific reference range or below 2.5 mIU/ml, if the trimester-specific ranges are not available.26
Regarding maternal subclinical hypothyroidism, the 2017 American Thyroid Association guidelines recommend utilizing TPOAb status along with serum levels of TSH to guide treatment decisions (TABLE 2).26 LT4 therapy is not recommended for isolated hypothyroxinemia.26 A 2021 systematic review and meta-analysis of 6 randomized controlled trials assessing the effect of LT4 treatment in euthyroid women with thyroid autoimmunity did not find any significant differences in the relative risk of miscarriage and preterm delivery, or outcomes with live birth. Therefore, no strong recommendations regarding the therapy in such scenarios could be made, but consideration on a case-by-case basis might be
implemented (TABLE 2).103
Areas of research
There are promising new models being developed to study the pathophysiology of
thyroid disease, as functional thyroid follicles from embryonic or pluripotent stem
cells were established in animal models.104,105 This potentially allows for studying
mechanisms of autoimmunity that could guide prevention of the disease progression to overt hypothyroidism in predisposed individuals. Stem cells could be also used in regenerative medicine to replace those destroyed by the autoimmune processes in the thyroid gland. A better understanding of the response to therapy with thyroid hormones might be achieved from studies focusing on transcriptome profiling of expression of genes responsive to thyroid hormone action. This could help titrating thyroid hormone replacement therapy. New preparations of sustained release T3 have successfully passed phase 1 clinical trials and may add to our armamentarium for HT therapy once necessary efficacy trials are completed. |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | I would like to know about the idea of holistic healthcare within business. How does holistic healthcare affect businesses and employees? What are the health benefits? | Holistic healthcare recognises the connection and balance between physical, mental, and spiritual wellbeing. Rather than concentrating on episodic care, it looks at the interaction between different conditions and wellbeing factors to treat a whole person. It aims to prevent, as well as treat, health issues.
Ten years ago, NHS England, along with other national partners, signed up to a series of commitments to support integrated care. The purpose included:
improve outcomes in population health and healthcare
tackle inequalities in outcomes, experience and access
enhance productivity and value for money
Today this same challenge and opportunity is set for employers. Eighty-seven percent of employees expect their employer to support them in balancing work and personal commitments. Yet nearly one in five employers aren't doing anything to improve employee health and wellbeing.
There are significant benefits to proactively addressing workforce wellbeing. And the most valuable approach is a holistic one. Let’s explore why.
For senior HR leaders, taking a holistic approach to wellbeing comes with a variety of advantages. According to the CIPD’s Health and wellbeing at work 2022 report:
48 percent of HR leaders agree their organisation’s employee health and wellbeing strategy has created a healthier and more inclusive culture.
46 percent agree that it has created better employee morale and engagement.
33 percent agree that it has created more effective working relationships.
27 percent agree that it has improved productivity.
Over the next decade, there will be 3.7 million more workers aged between 50 and the state pension age. At the same time, Generations Z and Alpha are establishing their place in the workforce.
A holistic approach to health is an excellent way to support an inclusive workplace. There's no ‘one-size-fits-all’ when it comes to health. What it means to ‘live well’ looks different for different people. So, you need to find a way to cater to every individual's unique needs.
Currently, only half of all organisations take a strategic approach to employee wellbeing. Over a third remain reactive to employee needs. For you to stand out, you need to actively listen to every employee and find ways to serve their needs. And having done so, you then need to find wellbeing solutions that cater to those requirements.
The bottom line? Develop a wellness strategy that has the flexibility to meet myriad needs and you will start to see tangible benefits.
The connection between mind and body is indisputable. Many studies show that physical wellness is directly influenced by mental wellness, and vice versa. In fact, having a serious mental illness can reduce your life expectancy by 10 to 20 years due to the impact it has across your body. This includes increased risk of heart disease, as well as a possible increase in your risk of cancer.
In the workplace, mental health concerns are the top cause of long-term employee absences at work. What's more, psychological conditions like severe anxiety and depression impact creativity and productivity.
79 percent of UK adults feel stressed at least once per month. And approximately two in three employees believe work is a significant source of stress. As an HR leader, it’s imperative you understand and advocate for mind-body wellness at work. Find creative ways to promote holistic health strategies and offer teams the relevant support to ensure they bring their best selves to work.
33 percent of workers report that workplace stress decreases productivity. It’s therefore critical that you find ways to address it. What’s more, happier employees are approximately 12 percent more productive.
Holistic health is important because it acts as a core enabler of employee happiness and productivity. There is an indisputable connection between good health and wellbeing and a reduction in stress. And unsurprisingly, reducing stress boosts happiness, which in turn increases productivity.
In addition to employee happiness is employee health. In the UK, musculoskeletal (MSK) conditions affect 1 in 4 of the adult population. A large proportion of these conditions affect young working people who experience daily symptoms including pain, stiffness, and limited movement.
Taking a holistic approach to health helps employees better manage not only the symptoms that arise from conditions like MSK, but also the root causes. These often include repetitive daily motion, inactivity, and overwork. Plus, it helps them manage the emotional burden that comes with chronic pain.
Presenteeism can cost your company £4,000 in lost business per employee, each year. Despite this, only 30 percent of HR leaders report their organisation has taken steps to tackle it.
You can reduce instances of presenteeism and leavism with a holistic wellbeing strategy. This ensures employees take the time they need to recover and return to work stronger than ever. It also allows them to then maintain their wellbeing while at work. This reduces the risk of relapse and increases focus and productivity.
Stressed employees are more than three times as likely to seek employment elsewhere compared to their less-stressed co-workers.
To reduce the risk of turnover, ensure managers offer regular check-ins with employees. This helps you monitor employee wellbeing and mitigate any impending departures.
Your commitment to the health of your workforce goes a long way in creating trust and respect. In turn, this generates engagement and builds loyalty.
A holistic healthcare strategy balances physical, emotional, and mental wellbeing. In turn, this encourages employees to take care of their entire selves. And in return, your employees will bring their entire selves into the workplace.
Our wellbeing platform offers employees the opportunity to receive personalised wellbeing advice within a single, streamlined solution. It equips employees with everything they need to take care of their whole health. This includes regular check-ins and engaging self-guided programmes. It also includes personalised chatbots, and access to specialist follow-up care when needed. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
I would like to know about the idea of holistic healthcare within business. How does holistic healthcare affect businesses and employees? What are the health benefits?
Holistic healthcare recognises the connection and balance between physical, mental, and spiritual wellbeing. Rather than concentrating on episodic care, it looks at the interaction between different conditions and wellbeing factors to treat a whole person. It aims to prevent, as well as treat, health issues.
Ten years ago, NHS England, along with other national partners, signed up to a series of commitments to support integrated care. The purpose included:
improve outcomes in population health and healthcare
tackle inequalities in outcomes, experience and access
enhance productivity and value for money
Today, this same challenge and opportunity is set for employers. Eighty-seven percent of employees expect their employer to support them in balancing work and personal commitments. Yet nearly one in five employers aren't doing anything to improve employee health and wellbeing.
There are significant benefits to proactively addressing workforce wellbeing. And the most valuable approach is a holistic one. Let’s explore why.
For senior HR leaders, taking a holistic approach to wellbeing comes with a variety of advantages. According to the CIPD’s Health and wellbeing at work 2022 report:
48 percent of HR leaders agree their organisation’s employee health and wellbeing strategy has created a healthier and more inclusive culture.
46 percent agree that it has created better employee morale and engagement.
33 percent agree that it has created more effective working relationships.
27 percent agree that it has improved productivity.
Over the next decade, there will be 3.7 million more workers aged between 50 and the state pension age. At the same time, Generations Z and Alpha are establishing their place in the workforce.
A holistic approach to health is an excellent way to support an inclusive workplace. There's no ‘one-size-fits-all’ when it comes to health. What it means to ‘live well’ looks different for different people. So, you need to find a way to cater to every individual's unique needs.
Currently, only half of all organisations take a strategic approach to employee wellbeing. Over a third remain reactive to employee needs. For you to stand out, you need to actively listen to every employee and find ways to serve their needs. And having done so, you then need to find wellbeing solutions that cater to those requirements.
The bottom line? Develop a wellness strategy that has the flexibility to meet myriad needs and you will start to see tangible benefits.
The connection between mind and body is indisputable. Many studies show that physical wellness is directly influenced by mental wellness, and vice versa. In fact, having a serious mental illness can reduce your life expectancy by 10 to 20 years due to the impact it has across your body. This includes increased risk of heart disease, as well as a possible increase in your risk of cancer.
In the workplace, mental health concerns are the top cause of long-term employee absences at work. What's more, psychological conditions like severe anxiety and depression impact creativity and productivity.
79 percent of UK adults feel stressed at least once per month. And approximately two in three employees believe work is a significant source of stress. As an HR leader, it’s imperative you understand and advocate for mind-body wellness at work. Find creative ways to promote holistic health strategies and offer teams the relevant support to ensure they bring their best selves to work.
33 percent of workers report that workplace stress decreases productivity. It’s therefore critical that you find ways to address it. What’s more, happier employees are approximately 12 percent more productive.
Holistic health is important because it acts as a core enabler of employee happiness and productivity. There is an indisputable connection between good health and wellbeing and a reduction in stress. And unsurprisingly, reducing stress boosts happiness, which in turn increases productivity.
In addition to employee happiness is employee health. In the UK, musculoskeletal (MSK) conditions affect 1 in 4 of the adult population. A large proportion of these conditions affect young working people who experience daily symptoms including pain, stiffness, and limited movement.
Taking a holistic approach to health helps employees better manage not only the symptoms that arise from conditions like MSK, but also the root causes. These often include repetitive daily motion, inactivity, and overwork. Plus, it helps them manage the emotional burden that comes with chronic pain.
Presenteeism can cost your company £4,000 in lost business per employee, each year. Despite this, only 30 percent of HR leaders report their organisation has taken steps to tackle it.
You can reduce instances of presenteeism and leavism with a holistic wellbeing strategy. This ensures employees take the time they need to recover and return to work stronger than ever. It also allows them to then maintain their wellbeing while at work. This reduces the risk of relapse and increases focus and productivity.
Stressed employees are more than three times as likely to seek employment elsewhere compared to their less-stressed co-workers.
To reduce the risk of turnover, ensure managers offer regular check-ins with employees. This helps you monitor employee wellbeing and mitigate any impending departures.
Your commitment to the health of your workforce goes a long way in creating trust and respect. In turn, this generates engagement and builds loyalty.
A holistic healthcare strategy balances physical, emotional, and mental wellbeing. In turn, this encourages employees to take care of their entire selves. And in return, your employees will bring their entire selves into the workplace.
Our wellbeing platform offers employees the opportunity to receive personalised wellbeing advice within a single, streamlined solution. It equips employees with everything they need to take care of their whole health. This includes regular check-ins and engaging self-guided programmes. It also includes personalised chatbots, and access to specialist follow-up care when needed.
https://www.healthhero.com/blog/what-is-holistic-healthcare-and-why-is-it-important |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | I read this article about genetic cancer. Can you explain how family cancer syndrome works? I'm thinking of starting a family in a few years, but need to know the pros and cons of cancer genetic testing. What should affect my decision to get testing at all? I also need to know how to prevent various cancer genetic changes from occurring so that the whole family can be safe. | Cancer-related genetic changes can occur because:
random mistakes in our DNA happen as our cells multiply
our DNA is altered by carcinogens in our environment, such as chemicals in tobacco smoke, UV rays from the sun, and the human papillomavirus (HPV)
they were inherited from one of our parents
DNA changes, whether caused by a random mistake or by a carcinogen, can happen throughout our lives and even in the womb. While most genetic changes aren’t harmful on their own, an accumulation of genetic changes over many years can turn healthy cells into cancerous cells. The vast majority of cancers occur by chance as a result of this process over time.
Cancer itself can’t be passed down from parents to children. And genetic changes in tumor cells can’t be passed down. But a genetic change that increases the risk of cancer can be passed down (inherited) if it is present in a parent's egg or sperm cells.
For example, if a parent passes a mutated BRCA1 or BRCA2 gene to their child, the child will have a much higher risk of developing breast and several other cancers.
That’s why cancer sometimes appears to run in families. Up to 10% of all cancers may be caused by inherited genetic changes.
Inheriting a cancer-related genetic change doesn’t mean you will definitely get cancer. It means that your risk of getting cancer is increased.
A family cancer syndrome, also called a hereditary cancer syndrome, is a rare disorder in which family members have a higher-than-average risk of developing a certain type or types of cancer. Family cancer syndromes are caused by inherited genetic variants in certain cancer-related genes.
With some family cancer syndromes, people tend to develop cancer at an early age or have other noncancer health conditions.
For example, familial adenomatous polyposis (FAP) is a family cancer syndrome caused by certain inherited changes in the APC gene. People with FAP have a very high chance of developing colorectal cancer at an early age and are also at risk of developing other kinds of cancer.
But not all cancers that appear to “run in families” are caused by family cancer syndromes. A shared environment or habits, such as exposure to air pollution or tobacco use, may cause the same kind of cancer to develop among family members.
Also, multiple family members may develop common cancers, such as prostate cancer, just by chance. Cancer can also run in a family if family members have a combination of many genetic variants that each have a very small cancer risk.
Certain genetic tests can show if you’ve inherited a genetic change that increases your risk of cancer. This testing is usually done with a small sample of blood, but it can sometimes be done with saliva, cells from inside the cheek, or skin cells.
Not everyone needs to get genetic testing for cancer risk. Your doctor or health care provider can help you decide if you should get tested for genetic changes that increase cancer risk. They will likely ask if you have certain patterns in your personal or family medical history, such as cancer at an unusually young age or several relatives with the same kind of cancer.
If your doctor recommends genetic testing, talking with a genetic counselor can help you consider the potential risks, benefits, and drawbacks of genetic testing in your situation. After testing, a genetic counselor, doctor, or other health care professional trained in genetics can help you understand what the test results mean for you and for your family members.
Although it’s possible to order an at-home genetic test on your own, these tests have many drawbacks and are not generally recommended as a way to see whether you have inherited a genetic change that increases cancer risk.
If you have cancer, a different type of genetic test called a biomarker test can identify genetic changes that may be driving the growth of your cancer. This information can help your doctors decide which therapy might work best for you or if you may be able to enroll in a particular clinical trial. For more information, see Biomarker Testing for Cancer Treatment. Biomarker testing may also be called tumor profiling or molecular profiling.
Biomarker testing is different from the genetic testing that is used to find out if you have an inherited genetic change that makes you more likely to get cancer. Biomarker testing is done using a sample of your cancer cells—either a small piece of a tumor or a sample of your blood.
In some cases, the results of a biomarker test might suggest that you have an inherited mutation that increases cancer risk. If that happens, you may need to get another genetic test to confirm whether you truly have an inherited mutation that increases cancer risk.
Genetic changes can lead to cancer if they alter the way your cells grow and spread. Most cancer-causing DNA changes occur in genes, which are sections of DNA that carry the instructions to make proteins or specialized RNA such as microRNA.
For example, some DNA changes raise the levels of proteins that tell cells to keep growing. Other DNA changes lower the levels of proteins that tell cells when to stop growing. And some DNA changes stop proteins that tell cells to self-destruct when they are damaged.
For a healthy cell to turn cancerous, scientists think that more than one DNA change has to occur. People who have inherited a cancer-related genetic change need fewer additional changes to develop cancer. However, they may never develop these changes or get cancer.
As cancer cells divide, they acquire more DNA changes over time. Two cancer cells in the same tumor can have different DNA changes. In addition, every person with cancer has a unique combination of DNA changes in their cancer.
Multiple kinds of genetic changes can lead to cancer. One genetic change, called a DNA mutation or genetic variant, is a change in the DNA code, like a typo in the sequence of DNA letters.
Some variants affect just one DNA letter, called a nucleotide. A nucleotide may be missing, or it may be replaced by another nucleotide. These are called point mutations.
For example, around 5% of people with cancer have a point mutation in the KRAS gene that replaces the DNA letter G with A. This single letter change creates an abnormal KRAS protein that constantly tells cells to grow.
Cancer-causing genetic changes can also occur when segments of DNA—sometimes very large ones—are rearranged, deleted, or copied. These are called chromosomal rearrangements.
For example, most chronic myelogenous leukemias (a type of blood cancer) are caused by a chromosomal rearrangement that places part of the BCR gene next to the ABL gene. This rearrangement creates an abnormal protein, called BCR-ABL, that makes leukemia cells grow out of control.
Some cancer-causing DNA changes occur outside genes, in sections of DNA that act like “on” or “off” switches for nearby genes. For example, some brain cancer cells have multiple copies of “on” switches next to genes that drive cell growth.
Other DNA changes, known as epigenetic changes, can also cause cancer. Unlike genetic variants, epigenetic changes (sometimes called epimutations) may be reversible and they don’t affect the DNA code. Instead, epigenetic changes affect how DNA is packed into the nucleus. By changing how DNA is packaged, epigenetic changes can alter how much protein a gene makes.
Some substances and chemicals in the environment that cause genetic changes can also cause epigenetic changes, such as tobacco smoke, heavy metals like cadmium, and viruses like Epstein-Barr virus. | [question]
I read this article about genetic cancer. Can you explain how family cancer syndrome works? I'm thinking of starting a family in a few years, but need to know the pros and cons of cancer genetic testing. What should affect my decision to get testing at all? I also need to know how to prevent various cancer genetic changes from occurring so that the whole family can be safe.
=====================
[text]
Cancer-related genetic changes can occur because:
random mistakes in our DNA happen as our cells multiply
our DNA is altered by carcinogens in our environment, such as chemicals in tobacco smoke, UV rays from the sun, and the human papillomavirus (HPV)
they were inherited from one of our parents
DNA changes, whether caused by a random mistake or by a carcinogen, can happen throughout our lives and even in the womb. While most genetic changes aren’t harmful on their own, an accumulation of genetic changes over many years can turn healthy cells into cancerous cells. The vast majority of cancers occur by chance as a result of this process over time.
Cancer itself can’t be passed down from parents to children. And genetic changes in tumor cells can’t be passed down. But a genetic change that increases the risk of cancer can be passed down (inherited) if it is present in a parent's egg or sperm cells.
For example, if a parent passes a mutated BRCA1 or BRCA2 gene to their child, the child will have a much higher risk of developing breast and several other cancers.
That’s why cancer sometimes appears to run in families. Up to 10% of all cancers may be caused by inherited genetic changes.
Inheriting a cancer-related genetic change doesn’t mean you will definitely get cancer. It means that your risk of getting cancer is increased.
A family cancer syndrome, also called a hereditary cancer syndrome, is a rare disorder in which family members have a higher-than-average risk of developing a certain type or types of cancer. Family cancer syndromes are caused by inherited genetic variants in certain cancer-related genes.
With some family cancer syndromes, people tend to develop cancer at an early age or have other noncancer health conditions.
For example, familial adenomatous polyposis (FAP) is a family cancer syndrome caused by certain inherited changes in the APC gene. People with FAP have a very high chance of developing colorectal cancer at an early age and are also at risk of developing other kinds of cancer.
But not all cancers that appear to “run in families” are caused by family cancer syndromes. A shared environment or habits, such as exposure to air pollution or tobacco use, may cause the same kind of cancer to develop among family members.
Also, multiple family members may develop common cancers, such as prostate cancer, just by chance. Cancer can also run in a family if family members have a combination of many genetic variants that each have a very small cancer risk.
Certain genetic tests can show if you’ve inherited a genetic change that increases your risk of cancer. This testing is usually done with a small sample of blood, but it can sometimes be done with saliva, cells from inside the cheek, or skin cells.
Not everyone needs to get genetic testing for cancer risk. Your doctor or health care provider can help you decide if you should get tested for genetic changes that increase cancer risk. They will likely ask if you have certain patterns in your personal or family medical history, such as cancer at an unusually young age or several relatives with the same kind of cancer.
If your doctor recommends genetic testing, talking with a genetic counselor can help you consider the potential risks, benefits, and drawbacks of genetic testing in your situation. After testing, a genetic counselor, doctor, or other health care professional trained in genetics can help you understand what the test results mean for you and for your family members.
Although it’s possible to order an at-home genetic test on your own, these tests have many drawbacks and are not generally recommended as a way to see whether you have inherited a genetic change that increases cancer risk.
If you have cancer, a different type of genetic test called a biomarker test can identify genetic changes that may be driving the growth of your cancer. This information can help your doctors decide which therapy might work best for you or if you may be able to enroll in a particular clinical trial. For more information, see Biomarker Testing for Cancer Treatment. Biomarker testing may also be called tumor profiling or molecular profiling.
Biomarker testing is different from the genetic testing that is used to find out if you have an inherited genetic change that makes you more likely to get cancer. Biomarker testing is done using a sample of your cancer cells—either a small piece of a tumor or a sample of your blood.
In some cases, the results of a biomarker test might suggest that you have an inherited mutation that increases cancer risk. If that happens, you may need to get another genetic test to confirm whether you truly have an inherited mutation that increases cancer risk.
Genetic changes can lead to cancer if they alter the way your cells grow and spread. Most cancer-causing DNA changes occur in genes, which are sections of DNA that carry the instructions to make proteins or specialized RNA such as microRNA.
For example, some DNA changes raise the levels of proteins that tell cells to keep growing. Other DNA changes lower the levels of proteins that tell cells when to stop growing. And some DNA changes stop proteins that tell cells to self-destruct when they are damaged.
For a healthy cell to turn cancerous, scientists think that more than one DNA change has to occur. People who have inherited a cancer-related genetic change need fewer additional changes to develop cancer. However, they may never develop these changes or get cancer.
As cancer cells divide, they acquire more DNA changes over time. Two cancer cells in the same tumor can have different DNA changes. In addition, every person with cancer has a unique combination of DNA changes in their cancer.
Multiple kinds of genetic changes can lead to cancer. One genetic change, called a DNA mutation or genetic variant, is a change in the DNA code, like a typo in the sequence of DNA letters.
Some variants affect just one DNA letter, called a nucleotide. A nucleotide may be missing, or it may be replaced by another nucleotide. These are called point mutations.
For example, around 5% of people with cancer have a point mutation in the KRAS gene that replaces the DNA letter G with A. This single letter change creates an abnormal KRAS protein that constantly tells cells to grow.
Cancer-causing genetic changes can also occur when segments of DNA—sometimes very large ones—are rearranged, deleted, or copied. These are called chromosomal rearrangements.
For example, most chronic myelogenous leukemias (a type of blood cancer) are caused by a chromosomal rearrangement that places part of the BCR gene next to the ABL gene. This rearrangement creates an abnormal protein, called BCR-ABL, that makes leukemia cells grow out of control.
Some cancer-causing DNA changes occur outside genes, in sections of DNA that act like “on” or “off” switches for nearby genes. For example, some brain cancer cells have multiple copies of “on” switches next to genes that drive cell growth.
Other DNA changes, known as epigenetic changes, can also cause cancer. Unlike genetic variants, epigenetic changes (sometimes called epimutations) may be reversible and they don’t affect the DNA code. Instead, epigenetic changes affect how DNA is packed into the nucleus. By changing how DNA is packaged, epigenetic changes can alter how much protein a gene makes.
Some substances and chemicals in the environment that cause genetic changes can also cause epigenetic changes, such as tobacco smoke, heavy metals like cadmium, and viruses like Epstein-Barr virus.
https://www.cancer.gov/about-cancer/causes-prevention/genetics
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Your response to the user must only use the information provided in the prompt context. Focus on terms and definitions whenever possible, explaining complex concepts using short sentences. Do not use vocabulary that requires advanced education to know, and if complex terms are used, they need to be defined in-place using phrasing a person with a high-school degree would understand. Limit your response to 100 words maximum, and answer the user's question with a well-organized list if they ask for definitions. If you cannot answer the user's question using the context alone, respond simply with "I do not have enough context to provide an answer at this time." | Which model showed debt underutilization in this study? | Since Modigliani and Miller (1958), economists have relaxed many of their
assumptions to understand the observed behavior of leverage ratios. Arguably,
the trade-off theory has emerged as one of the leading paradigms, even though
it has often been challenged by empirical tests that appear to favor other
theories or suggest taxes are not that important. Therefore, there is still no
consensus in the literature. Moreover, none of the extant theories jointly
address the following questions in a parsimonious and simple framework:
(1) why firms tend to use debt financing so conservatively, (2) whether there
is indeed a target leverage ratio and partial adjustment toward it, (3) why the leverage-growth relation is negative, and (4) why average leverage paths
persist for over two decades.
To answer these questions, we extend Hackbarth and Mauer (2012) to
multiple financing and investment decisions that maximize initial value. We
develop two versions of a dynamic model with endogenous financing and
investment decisions. While the multistage model features two sequentially
exercisable investment options, the single-stage model has only one investment option. The single-stage model serves as a benchmark to gauge
investment-financing interactions in the multistage model. In both versions,
the capital expenditure is funded by a mix of debt and equity. This mixture not
only trades off tax benefits of debt against bankruptcy costs (triggered by an
endogenous default decision) but also recognizes financial flexibility in the
multistage model.
The solution of the dynamic model offers a rich set of novel predictions that
link the behavior of a firm’s leverage ratios to its investment opportunities.
First, a role for financial flexibility emerges endogenously because dynamic
financing-investment interactions between stages lead to an “intertemporal
effect” in the multistage model: reaping investment (i.e., cash flow) and tax
benefits sooner by issuing more debt in the first stage to fund the investment
cost reduces financial flexibility for funding more of the investment cost with
debt in the second stage. In comparison to the single-stage model, firms
underutilize debt in the multistage model when financing investment the first
time to retain financial flexibility. Because both debt issues jointly optimize
initial equity value (and hence internalize dilutive externalities on each other),
underutilization of debt persists when firms mature (i.e., exercise their last
investment options), and underutilization is more (less) severe for more back-loaded (front-loaded) investment opportunities. It is worth noting that leverage
does not vary with investment in the single-stage model. Only in the multistage model do leverage dynamics crucially hinge on the structure of the investment process, in that it creates significant variation in target leverage ratios.1
Second, optimizing behavior by firms in a dynamic trade-off model with
investment generates a significant fraction of low or zero leverage firms and
path-dependent, persistent leverage ratios. Our analysis shows how incentives
to retain financial flexibility in the first stage crucially depend on the structure
of the investment process. Given the wide range of optimal target leverage
ratios, the model suggests that leverage ratios can greatly vary depending on
how the firm grows assets-in-place by exercising its real options.
Third, structural models without dynamic financing-investment interactions
(1) overestimate target leverage ratios, and (2) can be misleading in that they
imply a fixed target leverage ratio that is largely taken to be exogenous to the
investment process. It thus seems difficult to determine target leverage in the conventional sense. This also suggests that there is no meaningful measurement of partial adjustment toward target leverage (as in, e.g., Flannery and
Rangan 2006) without recognizing the structure of the investment process.2
To test the model’s ability to match observed outcomes, we estimate key
model parameters via simulated method of moments (SMM). Intuitively,
SMM finds the set of parameters that minimizes the difference between the
simulated model moments and the data moments from COMPUSTAT’s
annual tapes for the period of 1965 to 2009. We then split the full sample
into low, medium, and high market-to-book (or q) subsamples and employ
SMM also to fit the four parameters for each subsample. We split the sample
based on q to proxy for investment opportunities. Low q firms tend to have
fewer investment opportunities, whereas high q firms tend to have more
investment opportunities. Therefore, the relative value of q is informative
about the structure of the investment process in the real data. Our estimation
results reveal that high q firms have the most back-loaded investment processes, and low q firms have the most front-loaded ones.
Graham (2000) reports that firms, even stable and profitable, use less debt
than predicted by the static view of the tax benefits of debt. Two of five firms
have an average leverage ratio of less than 20%, and the median firm uses
only 31.4% leverage over the 1965 to 2000 period, which implies a “low
leverage puzzle.” More recently, Strebulaev and Yang (2013) find that on
average 10% of firms have zero leverage and almost 22% of firms have less
than 5% quasi-market leverage, which represents a “zero leverage puzzle.”
We emphasize the importance of real frictions in a dynamic trade-off model
and thereby provide an economically meaningful mechanism for why firms
tend to use debt financing so conservatively. Based on the structural estimation results for the full sample, the simulated economies feature a significant
fraction of low (and zero) leverage firms. Moreover, in contrast to much
higher point estimates in prior studies, we report, on average, 20% leverage
in dynamics (i.e., for all firms) and 19% at investment points (i.e., for investing firms).
In addition, we perform capital structure regressions on simulated data and
show that the model can replicate stylized facts established by empirical
research. In the spirit of Strebulaev (2007), simulation of the multistage model
of corporate investment and financing dynamics reinforces the need to differentiate investment points from other data points when interpreting coefficient
estimates for market-to-book or profitability in a dynamic world. Consistent
with Frank and Goyal (2009) and others, we find leverage is negatively related
to the risk of cash flows, the cost of bankruptcy, and market-to-book, but
positively related to the size of the firm and the tax rate. Finally, we document that real frictions in a dynamic model can produce
average leverage paths that closely resemble the ones in the data (e.g.,
Lemmon, Roberts, and Zender 2008).3 That is, endogenous investment and
financing decisions in a dynamic model can largely explain the otherwise
puzzling patterns that, despite some convergence, average leverage ratios
across portfolios are fairly stable over time for both types of sorts (i.e., actual
and unexpected leverage) performed by these authors.4 To do so, we extend
the multistage model to randomly imposed initial variation in leverage. If
model firms are “born” with high (low) leverage ratios at the beginning,
then they maintain their relatively high (low) levels for over two decades
(despite the fact that leverage ratios converge somewhat to more moderate
levels over time). This result illustrates that corporations, which know the
structure of their investment processes, take it into account and make decisions on debt usage accordingly. This leads to fairly stable leverage ratios, and
serves in the simulations as an important, unobserved determinant of the
permanent component of leverage.
The simplicity of our model allows us to develop a deeper understanding of
related results of the growing literature that extends Leland (1994) to interactions between investment and financing decisions.5 Hackbarth and Mauer’s
(2012) novel modeling feature is the explicit recognition that the firm’s existing capital structure influences future investment decisions through debt-equity (agency) conflicts as well as the financing mix of future investment.
Like Sundaresan, Wang, and Yang (2015), we extend their model to multiple
investment projects. While firms in Sundaresan et al. exhibit identical leverage
ratios when the last option is exercised, final leverage ratios of our model
firms vary widely.6 Titman and Tsyplakov (2007) numerically solve a complex model that features both financing and investment decisions evolving over
time. In contrast to, for example, Sundaresan et al., their model is based on
continuous investment decisions (as in Brennan and Schwartz 1984), whereas
our model focuses on discrete, irreversible, or lumpy investment that is equivalent to a “real transaction” cost so that the firm does not continuously invest
or refinance. | Your response to the user must only use the information provided in the prompt context. Focus on terms and definitions whenever possible, explaining complex concepts using short sentences. Do not use vocabulary that requires advanced education to know, and if complex terms are used, they need to be defined in-place using phrasing a person with a high-school degree would understand. Limit your response to 100 words maximum, and answer the user's question with a well-organized list if they ask for definitions. If you cannot answer the user's question using the context alone, respond simply with "I do not have enough context to provide an answer at this time."
Context: Since Modigliani and Miller (1958), economists have relaxed many of their
assumptions to understand the observed behavior of leverage ratios. Arguably,
the trade-off theory has emerged as one of the leading paradigms, even though
it has often been challenged by empirical tests that appear to favor other
theories or suggest taxes are not that important. Therefore, there is still no
consensus in the literature. Moreover, none of the extant theories jointly
address the following questions in a parsimonious and simple framework:
(1) why firms tend to use debt financing so conservatively, (2) whether there
is indeed a target leverage ratio and partial adjustment toward it, (3) why the leverage-growth relation is negative, and (4) why average leverage paths
persist for over two decades.
To answer these questions, we extend Hackbarth and Mauer (2012) to
multiple financing and investment decisions that maximize initial value. We
develop two versions of a dynamic model with endogenous financing and
investment decisions. While the multistage model features two sequentially
exercisable investment options, the single-stage model has only one investment option. The single-stage model serves as a benchmark to gauge
investment-financing interactions in the multistage model. In both versions,
the capital expenditure is funded by a mix of debt and equity. This mixture not
only trades off tax benefits of debt against bankruptcy costs (triggered by an
endogenous default decision) but also recognizes financial flexibility in the
multistage model.
The solution of the dynamic model offers a rich set of novel predictions that
link the behavior of a firm’s leverage ratios to its investment opportunities.
First, a role for financial flexibility emerges endogenously because dynamic
financing-investment interactions between stages lead to an “intertemporal
effect” in the multistage model: reaping investment (i.e., cash flow) and tax
benefits sooner by issuing more debt in the first stage to fund the investment
cost reduces financial flexibility for funding more of the investment cost with
debt in the second stage. In comparison to the single-stage model, firms
underutilize debt in the multistage model when financing investment the first
time to retain financial flexibility. Because both debt issues jointly optimize
initial equity value (and hence internalize dilutive externalities on each other),
underutilization of debt persists when firms mature (i.e., exercise their last
investment options), and underutilization is more (less) severe for more back-loaded (front-loaded) investment opportunities. It is worth noting that leverage
does not vary with investment in the single-stage model. Only in the multistage model do leverage dynamics crucially hinge on the structure of the investment process, in that it creates significant variation in target leverage ratios.1
Second, optimizing behavior by firms in a dynamic trade-off model with
investment generates a significant fraction of low or zero leverage firms and
path-dependent, persistent leverage ratios. Our analysis shows how incentives
to retain financial flexibility in the first stage crucially depend on the structure
of the investment process. Given the wide range of optimal target leverage
ratios, the model suggests that leverage ratios can greatly vary depending on
how the firm grows assets-in-place by exercising its real options.
Third, structural models without dynamic financing-investment interactions
(1) overestimate target leverage ratios, and (2) can be misleading in that they
imply a fixed target leverage ratio that is largely taken to be exogenous to the
investment process. It thus seems difficult to determine target leverage in the conventional sense. This also suggests that there is no meaningful measurement of partial adjustment toward target leverage (as in, e.g., Flannery and
Rangan 2006) without recognizing the structure of the investment process.2
To test the model’s ability to match observed outcomes, we estimate key
model parameters via simulated method of moments (SMM). Intuitively,
SMM finds the set of parameters that minimizes the difference between the
simulated model moments and the data moments from COMPUSTAT’s
annual tapes for the period of 1965 to 2009. We then split the full sample
into low, medium, and high market-to-book (or q) subsamples and employ
SMM also to fit the four parameters for each subsample. We split the sample
based on q to proxy for investment opportunities. Low q firms tend to have
fewer investment opportunities, whereas high q firms tend to have more
investment opportunities. Therefore, the relative value of q is informative
about the structure of the investment process in the real data. Our estimation
results reveal that high q firms have the most back-loaded investment processes, and low q firms have the most front-loaded ones.
Graham (2000) reports that firms, even stable and profitable, use less debt
than predicted by the static view of the tax benefits of debt. Two of five firms
have an average leverage ratio of less than 20%, and the median firm uses
only 31.4% leverage over the 1965 to 2000 period, which implies a “low
leverage puzzle.” More recently, Strebulaev and Yang (2013) find that on
average 10% of firms have zero leverage and almost 22% of firms have less
than 5% quasi-market leverage, which represents a “zero leverage puzzle.”
We emphasize the importance of real frictions in a dynamic trade-off model
and thereby provide an economically meaningful mechanism for why firms
tend to use debt financing so conservatively. Based on the structural estimation results for the full sample, the simulated economies feature a significant
fraction of low (and zero) leverage firms. Moreover, in contrast to much
higher point estimates in prior studies, we report, on average, 20% leverage
in dynamics (i.e., for all firms) and 19% at investment points (i.e., for investing firms).
In addition, we perform capital structure regressions on simulated data and
show that the model can replicate stylized facts established by empirical
research. In the spirit of Strebulaev (2007), simulation of the multistage model
of corporate investment and financing dynamics reinforces the need to differentiate investment points from other data points when interpreting coefficient
estimates for market-to-book or profitability in a dynamic world. Consistent
with Frank and Goyal (2009) and others, we find leverage is negatively related
to the risk of cash flows, the cost of bankruptcy, and market-to-book, but
positively related to the size of the firm and the tax rate. Finally, we document that real frictions in a dynamic model can produce
average leverage paths that closely resemble the ones in the data (e.g.,
Lemmon, Roberts, and Zender 2008).3 That is, endogenous investment and
financing decisions in a dynamic model can largely explain the otherwise
puzzling patterns that, despite some convergence, average leverage ratios
across portfolios are fairly stable over time for both types of sorts (i.e., actual
and unexpected leverage) performed by these authors.4 To do so, we extend
the multistage model to randomly imposed initial variation in leverage. If
model firms are “born” with high (low) leverage ratios at the beginning,
then they maintain their relatively high (low) levels for over two decades
(despite the fact that leverage ratios converge somewhat to more moderate
levels over time). This result illustrates that corporations, which know the
structure of their investment processes, take it into account and make decisions on debt usage accordingly. This leads to fairly stable leverage ratios, and
serves in the simulations as an important, unobserved determinant of the
permanent component of leverage.
The simplicity of our model allows us to develop a deeper understanding of
related results of the growing literature that extends Leland (1994) to interactions between investment and financing decisions.5 Hackbarth and Mauer’s
(2012) novel modeling feature is the explicit recognition that the firm’s existing capital structure influences future investment decisions through debt-equity (agency) conflicts as well as the financing mix of future investment.
Like Sundaresan, Wang, and Yang (2015), we extend their model to multiple
investment projects. While firms in Sundaresan et al. exhibit identical leverage
ratios when the last option is exercised, final leverage ratios of our model
firms vary widely.6 Titman and Tsyplakov (2007) numerically solve a complex model that features both financing and investment decisions evolving over
time. In contrast to, for example, Sundaresan et al., their model is based on
continuous investment decisions (as in Brennan and Schwartz 1984), whereas
our model focuses on discrete, irreversible, or lumpy investment that is equivalent to a “real transaction” cost so that the firm does not continuously invest
or refinance.
Which model showed debt underutilization in this study? |
Response must not be more than 150 words.
Response must be in bullet points.
Model must only respond using information contained in the context block
Model must not rely on its own knowledge or outside sources of information when responding. | What methods does the NYSDOH AIDS Institute suggest HIV doctors use to keep their HIV-positive patients consistently engaged in their medical care? | NYSDOH AIDS Institute Linkage and Retention Workgroup Updated 6.20.2019 HIV Medical Providers: Strategies and Resources for Retention in Care The purpose of this document is to provide resources and information to support HIV health care practitioners’ efforts to retain HIV-positive people in medical care. Ensuring that people with HIV have access to HIV primary care is a cornerstone of both New York State’s Ending the Epidemic Blueprint and the National HIV/AIDS Strategy. Persons engaged in health care have better health outcomes such as improved viral suppression, which helps patients live longer and healthier lives and avoid transmission of the virus. On April 1, 2014, Public Health Law Section 2135 was amended to promote linkage and retention in care for HIV-positive persons. The law allows the New York State Department of Health (NYSDOH) and New York City Department of Health and Mental Hygiene (NYC DOHMH) to share information with health care providers for purposes of patient linkage and retention in care. The NYSDOH AIDS Institute recommends that health care providers take a multi-pronged approach to support their patients’ retention in care, including but not limited to the following: Have a proactive patient plan: Do not wait for a lapse in care to discuss what to do if the patient becomes lost-to-care. ▪ Create a patient-centered atmosphere, where all members of medical care teams (e.g., reception staff, phlebotomists, medical providers, etc.) promote patient engagement, linkage, and retention in care. ▪ When acceptable to patients, expand authorization dates on Authorization for Release of Health Information and Confidential HIV-Related Information forms (DOH-2557) to at least 2 years. Extending consent timeframes allows collaboration across sectors. ▪ Have DOH-2557 consent forms on file for every patient. This will permit you to contact community based organizations (CBOs) and others in the event of a lapse in care. Examples of CBOs that can help return patients to care include but are not limited to: HIV/AIDS CBOs; Health Homes and their downstream providers; food and nutrition programs; shelters; substance use treatment facilities; housing providers; mental health providers; prenatal care providers, etc. ▪ Encourage patients to add your practice’s name to any releases they sign with other organizations. ▪ Work with patients to update releases prior to when the releases expire (if applicable). ▪ Become a member of your area’s Health Home network(s) if you have not already done so. o for more information go to: https://www.health.ny.gov/health_care/medicaid/program/medicaid_health_homes/hh_map/index.htm Leverage existing resources for patient re-engagement. ▪ Use information from the Regional Health Information Organization (RHIO), if available, to determine if the patient is in care with another provider or if updated personal contact information is available. ▪ Conduct a health insurance benefits check, if available, on the patient to determine if s/he changed insurance or is in care with another provider. ▪ If the patient is in a Managed Care plan, the plan will have updated contact information, recent use of care, and medications on file. 
If this is a Medicaid Managed Care Plan, the plan can identify which Health Home the patient may be enrolled in and this information may be useful to your follow-up efforts. o If your patient is enrolled in a Health Home and has signed a release, contact the Health Home to determine whether the patient is actively enrolled. If yes, request assistance to contact or re-engage the patient in care. o If your patient has Medicaid but has not been enrolled in a Health Home, contact the Health Home to make an “upstream referral.” The patient will be referred to a provider who may conduct outreach to the patient’s home. ▪ Try multiple modes of contact (phone, text, letter, email, and social media) at varying times of the day/week to reach the patient (special consideration for social media sites – contact patient from an agency social media account and not a staff person’s personal account). ▪ If your patient uses other services within the facility (e.g., WIC, dental, child’s provider), place an alert on the Electronic Medical Record (EMR) to reconnect to the HIV Primary Care Provider and, if pregnant, to her prenatal care provider. ▪ As authorized in patient releases and/or medical charts, work with emergency contacts and other agencies/providers to determine whether they have had recent patient contact. ▪ Conduct a home visit if resources allow. If you have a peer program, utilize peers to provide outreach to the patient’s home. NYSDOH AIDS Institute Linkage and Retention Workgroup Updated 6.20.2019 Use external systems to expand your search when you cannot find a patient. ▪ Review public records such as: o Property tax rolls, municipal tax rolls, etc.: http://publicrecords.onlinesearches.com/NewYork.htm o Parole Lookup: http://www.doccs.ny.gov/lookup.html o NYS County Jail inmate lookup: https://vinelink.vineapps.com/login/NY o NYC Department of Corrections inmate lookup: http://www1.nyc.gov/site/doc/inmateinfo/inmate-lookup.page o NYS Department of Corrections and Community Supervision Inmate lookup: http://nysdoccslookup.doccs.ny.gov/ o Consider using people search engines, local newspapers, and police blotters. ▪ Social Security Death Master File Portal: https://www.npcrcss.cdc.gov/ssdi/ (A user ID and password are required to access the site and may be obtained by calling (301) 572-0502.) Pregnant women and exposed infants lost-to-care require immediate action for reengagement. HIV-positive pregnant women and their exposed infants are a priority when identified as lost-to-care and require immediate action for re-engagement. Reengagement in care is especially important for HIV-positive pregnant women who are in their third trimester due to possible increasing viral loads from being non-adherent to ART, leading to increased risk of transmitting HIV to their infants. Ensuring exposed infants are engaged in care is critical during the first 4-6 months to ensure appropriate antiretroviral and opportunistic infection prophylaxis, as well as definitive documentation of the infant’s HIV infection status. If routine attempts for reengagement of the HIV-positive pregnant woman or her exposed or infected infant(s) are not successful, please contact the NYSDOH Perinatal HIV Prevention Program at (518) 486-6048 or submit a request via the NYSDOH HIV/AIDS Provider Portal (see below) for assistance. NYC providers should call the NYC DOHMH Field Services Unit at (347) 396-7601 for assistance with reengagement of pregnant women. 
NYC-based providers (located within the 5 boroughs): Eligible NYC providers with patients who have been out-of-care for 6 months or longer can use the NYC DOHMH’s HIV Care Status Reports System (CSR) to obtain information on patients’ current care status in NYC. Information from the CSR may be useful to your follow-up efforts. For more information, see https://www1.nyc.gov/site/doh/health/health-topics/aids-hiv-care-status-reports-system.page Eligible NYC providers may also call the NYC DOHMH Provider Call Line at (212) 442-3388 to obtain information that may help link or retain patients in care. For providers based in NYS outside of NYC: After exploring the investigation tools and strategies listed above and if patient follow-up is warranted, the Bureau of HIV/AIDS Epidemiology (BHAE) may be able to provide information regarding a patient’s care status through the NYSDOH HIV/AIDS Provider Portal. The HIV/AIDS Provider Portal is an electronic system which enables clinicians to: 1) meet their reporting requirements electronically; 2) provide a mechanism for clinicians statewide to notify the NYS DOH that a patient needs linkage to Health Department Partner Services; and, 3) submit inquiries for patients with diagnosed HIV infection who are thought to be in need of assistance with linkage to or retention in HIV medical care. A NYSDOH Health Commerce System (HCS) Medical Professionals account is required. To apply for an HCS Medical Professionals account, navigate to: https://apps.health.ny.gov/pub/top.html. After logging into the HCS at https://commerce.health.ny.gov/, select “Refresh My Applications List” on the left side and then under “My Applications” select HIV/AIDS Provider Portal. Follow the prompts to set up an account. Urgent requests will be responded to within 1 business day. For routine requests to the HIV/AIDS Provider Portal, the turn-around time is typically within 1-3 business days. | Response must not be more than 150 words.
Response must be in bullet points.
Model must only respond using information contained in the context block
Model must not rely on its own knowledge or outside sources of information when responding.
What methods does the NYSDOH AIDS Institute suggest HIV doctors use to keep their HIV-positive patients consistently engaged in their medical care?
NYSDOH AIDS Institute Linkage and Retention Workgroup Updated 6.20.2019 HIV Medical Providers: Strategies and Resources for Retention in Care The purpose of this document is to provide resources and information to support HIV health care practitioners’ efforts to retain HIV-positive people in medical care. Ensuring that people with HIV have access to HIV primary care is a cornerstone of both New York State’s Ending the Epidemic Blueprint and the National HIV/AIDS Strategy. Persons engaged in health care have better health outcomes such as improved viral suppression, which helps patients live longer and healthier lives and avoid transmission of the virus. On April 1, 2014, Public Health Law Section 2135 was amended to promote linkage and retention in care for HIV-positive persons. The law allows the New York State Department of Health (NYSDOH) and New York City Department of Health and Mental Hygiene (NYC DOHMH) to share information with health care providers for purposes of patient linkage and retention in care. The NYSDOH AIDS Institute recommends that health care providers take a multi-pronged approach to support their patients’ retention in care, including but not limited to the following: Have a proactive patient plan: Do not wait for a lapse in care to discuss what to do if the patient becomes lost-to-care. ▪ Create a patient-centered atmosphere, where all members of medical care teams (e.g., reception staff, phlebotomists, medical providers, etc.) promote patient engagement, linkage, and retention in care. ▪ When acceptable to patients, expand authorization dates on Authorization for Release of Health Information and Confidential HIV-Related Information forms (DOH-2557) to at least 2 years. Extending consent timeframes allows collaboration across sectors. ▪ Have DOH-2557 consent forms on file for every patient. This will permit you to contact community based organizations (CBOs) and others in the event of a lapse in care. Examples of CBOs that can help return patients to care include but are not limited to: HIV/AIDS CBOs; Health Homes and their downstream providers; food and nutrition programs; shelters; substance use treatment facilities; housing providers; mental health providers; prenatal care providers, etc. ▪ Encourage patients to add your practice’s name to any releases they sign with other organizations. ▪ Work with patients to update releases prior to when the releases expire (if applicable). ▪ Become a member of your area’s Health Home network(s) if you have not already done so. o for more information go to: https://www.health.ny.gov/health_care/medicaid/program/medicaid_health_homes/hh_map/index.htm Leverage existing resources for patient re-engagement. ▪ Use information from the Regional Health Information Organization (RHIO), if available, to determine if the patient is in care with another provider or if updated personal contact information is available. ▪ Conduct a health insurance benefits check, if available, on the patient to determine if s/he changed insurance or is in care with another provider. ▪ If the patient is in a Managed Care plan, the plan will have updated contact information, recent use of care, and medications on file. If this is a Medicaid Managed Care Plan, the plan can identify which Health Home the patient may be enrolled in and this information may be useful to your follow-up efforts. o If your patient is enrolled in a Health Home and has signed a release, contact the Health Home to determine whether the patient is actively enrolled. 
If yes, request assistance to contact or re-engage the patient in care. o If your patient has Medicaid but has not been enrolled in a Health Home, contact the Health Home to make an “upstream referral.” The patient will be referred to a provider who may conduct outreach to the patient’s home. ▪ Try multiple modes of contact (phone, text, letter, email, and social media) at varying times of the day/week to reach the patient (special consideration for social media sites – contact patient from an agency social media account and not a staff person’s personal account). ▪ If your patient uses other services within the facility (e.g., WIC, dental, child’s provider), place an alert on the Electronic Medical Record (EMR) to reconnect to the HIV Primary Care Provider and, if pregnant, to her prenatal care provider. ▪ As authorized in patient releases and/or medical charts, work with emergency contacts and other agencies/providers to determine whether they have had recent patient contact. ▪ Conduct a home visit if resources allow. If you have a peer program, utilize peers to provide outreach to the patient’s home. Use external systems to expand your search when you cannot find a patient. ▪ Review public records such as: o Property tax rolls, municipal tax rolls, etc.: http://publicrecords.onlinesearches.com/NewYork.htm o Parole Lookup: http://www.doccs.ny.gov/lookup.html o NYS County Jail inmate lookup: https://vinelink.vineapps.com/login/NY o NYC Department of Corrections inmate lookup: http://www1.nyc.gov/site/doc/inmateinfo/inmate-lookup.page o NYS Department of Corrections and Community Supervision Inmate lookup: http://nysdoccslookup.doccs.ny.gov/ o Consider using people search engines, local newspapers, and police blotters. ▪ Social Security Death Master File Portal: https://www.npcrcss.cdc.gov/ssdi/ (A user ID and password are required to access the site and may be obtained by calling (301) 572-0502.) Pregnant women and exposed infants lost-to-care require immediate action for reengagement. HIV-positive pregnant women and their exposed infants are a priority when identified as lost-to-care and require immediate action for re-engagement. Reengagement in care is especially important for HIV-positive pregnant women who are in their third trimester due to possible increasing viral loads from being non-adherent to ART, leading to increased risk of transmitting HIV to their infants. Ensuring exposed infants are engaged in care is critical during the first 4-6 months to ensure appropriate antiretroviral and opportunistic infection prophylaxis, as well as definitive documentation of the infant’s HIV infection status. If routine attempts for reengagement of the HIV-positive pregnant woman or her exposed or infected infant(s) are not successful, please contact the NYSDOH Perinatal HIV Prevention Program at (518) 486-6048 or submit a request via the NYSDOH HIV/AIDS Provider Portal (see below) for assistance. NYC providers should call the NYC DOHMH Field Services Unit at (347) 396-7601 for assistance with reengagement of pregnant women. NYC-based providers (located within the 5 boroughs): Eligible NYC providers with patients who have been out-of-care for 6 months or longer can use the NYC DOHMH’s HIV Care Status Reports System (CSR) to obtain information on patients’ current care status in NYC. Information from the CSR may be useful to your follow-up efforts.
For more information, see https://www1.nyc.gov/site/doh/health/health-topics/aids-hiv-care-status-reports-system.page Eligible NYC providers may also call the NYC DOHMH Provider Call Line at (212) 442-3388 to obtain information that may help link or retain patients in care. For providers based in NYS outside of NYC: After exploring the investigation tools and strategies listed above and if patient follow-up is warranted, the Bureau of HIV/AIDS Epidemiology (BHAE) may be able to provide information regarding a patient’s care status through the NYSDOH HIV/AIDS Provider Portal. The HIV/AIDS Provider Portal is an electronic system which enables clinicians to: 1) meet their reporting requirements electronically; 2) provide a mechanism for clinicians statewide to notify the NYS DOH that a patient needs linkage to Health Department Partner Services; and, 3) submit inquiries for patients with diagnosed HIV infection who are thought to be in need of assistance with linkage to or retention in HIV medical care. A NYSDOH Health Commerce System (HCS) Medical Professionals account is required. To apply for an HCS Medical Professionals account, navigate to: https://apps.health.ny.gov/pub/top.html. After logging into the HCS at https://commerce.health.ny.gov/, select “Refresh My Applications List” on the left side and then under “My Applications” select HIV/AIDS Provider Portal. Follow the prompts to set up an account. Urgent requests will be responded to within 1 business day. For routine requests to the HIV/AIDS Provider Portal, the turn-around time is typically within 1-3 business days. |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | can you summarize this text in maximum half the words it used, but with the same number of sections that the original text has. Keep the language simple and be sure to define key terms. | 1.7.0 Disentitlement
Under the EI legislation, the term "disentitlement" has a specific meaning and refers to the situations described below (EI Act 6(1)). Disentitlements are imposed for something the claimant has failed to do to prove entitlement to benefits (EI Act 49(1)), for example, failed to provide information that is crucial to determining if they are entitled to benefits, failed to prove their availability for work, failed to prove they are unemployed.
One or more disentitlements can be imposed concurrently, when there is more than one ground for disentitlement.
1.7.1 Effect of disentitlement
Disentitlements are imposed for as little as one day, or for an indefinite period of time. In practice, a disentitlement may be imposed on any working day of the week, and continue as long as the situation that led to the disentitlement remains unchanged. If or when the claimant’s situation changes, a decision must be made as to whether the disentitlement can be terminated or rescinded completely.
Benefits are not paid, or deemed to be paid, for any days of disentitlement. When a disentitlement covers a full week, it will delay the payment of benefits, and will not reduce the maximum number of weeks that could potentially be paid in the claimant’s benefit period. However, once the benefit period terminates (52 weeks plus any extensions is reached), no further benefits can be paid in that benefit period (Digest 1.4.4). This may mean that a lengthy period of disentitlement, similar to a lengthy period during which earnings are allocated, may in fact, reduce the number of weeks of benefits actually paid to a claimant (CUB 76507).
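For illustration only (this sketch is not part of the Digest), the arithmetic behind that last caveat can be written out in a few lines of Python. The function name and all figures are made up for the example: a 52-week benefit period with no extensions, 36 weeks of entitlement, weeks paid consecutively from the start of the claim, and a single disentitlement that begins right after the last week paid.

```python
# Hypothetical illustration only; real EI calculations involve many more rules.
def weeks_actually_payable(benefit_period_weeks: int,
                           entitlement_weeks: int,
                           weeks_already_paid: int,
                           disentitlement_weeks: int) -> int:
    """Weeks of benefits that can still be paid once the disentitlement ends,
    given that nothing is payable after the benefit period terminates."""
    # Weeks of the benefit period left when the disentitlement is over.
    weeks_left_in_period = (benefit_period_weeks
                            - weeks_already_paid
                            - disentitlement_weeks)
    # Entitlement not yet used.
    entitlement_left = entitlement_weeks - weeks_already_paid
    return max(0, min(weeks_left_in_period, entitlement_left))

# 10 weeks already paid, then a 30-week disentitlement: only 12 weeks remain
# in the 52-week benefit period, so 14 of the remaining 26 weeks of
# entitlement are never paid, even though the entitlement itself is unchanged.
print(weeks_actually_payable(52, 36, 10, 30))  # -> 12
```

Under these assumptions the 30-week disentitlement does not reduce the 36-week entitlement, but it pushes 14 of those weeks past the end of the benefit period, so they are never paid.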
1.7.2 Grounds for disentitlement
Some of the following situations automatically result in disentitlement whereas others are not specifically defined:
working a full week (Digest Chapter 4)
not available for work (EI Act 18(a); Digest Chapter 10)
failure to prove incapacity for work in the case of sickness benefits (EI Act 18(b); EI Regulation 40)
minor attachment claimant who ceased work due to incapacity (EI Act 21(1))
loss of employment or inability to resume a previous employment by reason of a labour dispute (EI Act 36; EI Regulation 52; Digest Chapter 8)
confinement in a prison or similar institution (EI Act 37(a); EI Regulation 54; Digest 10.11.7)
being out of Canada (EI Act 37(b); EI Regulation 55; Digest 10.11.8)
non-entitlement of a teacher during the non-teaching period (EI Regulation 33; Digest Chapter 14)
delay in making a renewal or continuing claim (EI Act 50(4); EI Regulation 26(1))
failure to provide information upon request (EI Act 50(5 & 6))
suspension from employment because of misconduct (EI Act 31; Digest Chapter 7)
voluntarily taking a leave from employment without just cause (EI Act 32; Digest Chapter 6)
voluntarily leaving employment permanently without just cause, or losing employment by reason of their misconduct, within three weeks of termination from that employment (EI Act 33(1); Digest Chapter 6, Digest Chapter 7)
not entitled to compassionate care benefits (EI Act 23.1; Digest Chapter 23)
not entitled to family caregiver benefits (EI Act 23.2; EI Act 23.3; Digest Chapter 22)
having received or being entitled to receive provincial benefits in respect of the birth or adoption of a child under a provincial plan (EI Regulation 76.09; Quebec Parental Insurance Plan)
Each of the above grounds will be discussed in detail in subsequent chapters.
1.7.3 Length of disentitlement
The legislation does not provide for a half-day disentitlement (EI Act 20). When it is determined that a disentitlement is warranted, the disentitlement must be applied for a minimum of one day.
Extenuating circumstances cannot reduce a period of disentitlement; either the claimant meets the entitlement condition or they do not. The reason they may not meet it is not a factor to consider when determining if the condition is met. The start date of the disentitlement may be determined ahead of time, for example in cases where a claimant intends to be absent from Canada for vacation. However, the end of the absence may not always be known. If known, the end date of the disentitlement will be input, and the claimant is not required to contact the Commission upon return, unless there is a change to the end date. If the end date is not known, the claimant must contact the Commission upon their return, to have the end date of the disentitlement reviewed.
An ongoing disentitlement may be imposed for less than five days each week. This may be the case, for example, when the disentitlement is related to the availability or capability for work of a claimant, or in the case of a labour dispute.
Unless the disentitlement can be suspended, as in the case of labour dispute (Digest 8.10.0), a disentitlement continues for as long as the condition leading to the disentitlement continues to exist. However, a new ground for disentitlement requires a separate decision. | [question]
can you summarize this text in maximum half the words it used, but with the same number of sections that the original text has. Keep the language simple and be sure to define key terms.
=====================
[text]
1.7.0 Disentitlement
Under the EI legislation, the term "disentitlement" has a specific meaning and refers to the situations described below (EI Act 6(1)). Disentitlements are imposed for something the claimant has failed to do to prove entitlement to benefits (EI Act 49(1)), for example, failed to provide information that is crucial to determining if they are entitled to benefits, failed to prove their availability for work, failed to prove they are unemployed.
One or more disentitlements can be imposed concurrently, when there is more than one ground for disentitlement.
1.7.1 Effect of disentitlement
Disentitlements are imposed for as little as one day, or for an indefinite period of time. In practice, a disentitlement may be imposed on any working day of the week, and continue as long as the situation that led to the disentitlement remains unchanged. If or when the claimant’s situation changes, a decision must be made as to whether the disentitlement can be terminated or rescinded completely.
Benefits are not paid, or deemed to be paid, for any days of disentitlement. When a disentitlement covers a full week, it will delay the payment of benefits, and will not reduce the maximum number of weeks that could potentially be paid in the claimant’s benefit period. However, once the benefit period terminates (52 weeks plus any extensions is reached), no further benefits can be paid in that benefit period (Digest 1.4.4). This may mean that a lengthy period of disentitlement, similar to a lengthy period during which earnings are allocated, may in fact, reduce the number of weeks of benefits actually paid to a claimant (CUB 76507).
1.7.2 Grounds for disentitlement
Some of the following situations automatically result in disentitlement whereas others are not specifically defined:
working a full week (Digest Chapter 4)
not available for work (EI Act 18(a); Digest Chapter 10)
failure to prove incapacity for work in the case of sickness benefits (EI Act 18(b); EI Regulation 40)
minor attachment claimant who ceased work due to incapacity (EI Act 21(1))
loss of employment or inability to resume a previous employment by reason of a labour dispute (EI Act 36; EI Regulation 52; Digest Chapter 8)
confinement in a prison or similar institution (EI Act 37(a); EI Regulation 54; Digest 10.11.7)
being out of Canada (EI Act 37(b); EI Regulation 55; Digest 10.11.8)
non-entitlement of a teacher during the non-teaching period (EI Regulation 33; Digest Chapter 14)
delay in making a renewal or continuing claim (EI Act 50(4); EI Regulation 26(1))
failure to provide information upon request (EI Act 50(5 & 6))
suspension from employment because of misconduct (EI Act 31; Digest Chapter 7)
voluntarily taking a leave from employment without just cause (EI Act 32; Digest Chapter 6)
voluntarily leaving employment permanently without just cause, or losing employment by reason of their misconduct, within three weeks of termination from that employment (EI Act 33(1); Digest Chapter 6, Digest Chapter 7)
not entitled to compassionate care benefits (EI Act 23.1; Digest Chapter 23)
not entitled to family caregiver benefits (EI Act 23.2; EI Act 23.3; Digest Chapter 22)
having received or being entitled to receive provincial benefits in respect of the birth or adoption of a child under a provincial plan (EI Regulation 76.09; Quebec Parental Insurance Plan)
Each of the above grounds will be discussed in detail in subsequent chapters.
1.7.3 Length of disentitlement
The legislation does not provide for a half-day disentitlement (EI Act 20). When it is determined that a disentitlement is warranted, the disentitlement must be applied for a minimum of one day.
Extenuating circumstances cannot reduce a period of disentitlement; either the claimant meets the entitlement condition or they do not. The reason they may not meet it is not a factor to consider when determining if the condition is met. The start date of the disentitlement may be determined ahead of time, for example in cases where a claimant intends to be absent from Canada for vacation. However, the end of the absence may not always be known. If known, the end date of the disentitlement will be input, and the claimant is not required to contact the Commission upon return, unless there is a change to the end date. If the end date is not known, the claimant must contact the Commission upon their return, to have the end date of the disentitlement reviewed.
An ongoing disentitlement may be imposed for less than five days each week. This may be the case, for example, when the disentitlement is related to the availability or capability for work of a claimant, or in the case of a labour dispute.
Unless the disentitlement can be suspended, as in the case of labour dispute (Digest 8.10.0), a disentitlement continues for as long as the condition leading to the disentitlement continues to exist. However, a new ground for disentitlement requires a separate decision.
https://www.canada.ca/en/employment-social-development/programs/ei/ei-list/reports/digest/chapter-1/disentitlement.html
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Answer the questions using only the provided text, do not use any other outside sources for information. Any mention of a Supreme Court Justice by name should be in bold. | What specific concerns did the dissenting Supreme Court Justices have on this ruling? | JUSTICE BREYER, JUSTICE SOTOMAYOR, and JUSTICE
KAGAN, dissenting.
For half a century, Roe v. Wade, and
Planned Parenthood of Southeastern Pa. v. Casey, have protected the liberty and equality of
women. Roe held, and Casey reaffirmed, that the Constitution safeguards a woman’s right to decide for herself
whether to bear a child. Roe held, and Casey reaffirmed,
that in the first stages of pregnancy, the government could
not make that choice for women. The government could not
control a woman’s body or the course of a woman’s life: It
could not determine what the woman’s future would be. Respecting a
woman as an autonomous being, and granting her full
equality, meant giving her substantial choice over this most
personal and most consequential of all life decisions.
Roe and Casey well understood the difficulty and divisiveness of the abortion issue. The Court knew that Americans
hold profoundly different views about the “moral[ity]” of
“terminating a pregnancy, even in its earliest stage.” And the Court recognized that “the state has legitimate interests from the outset of the pregnancy in protecting” the “life of the fetus that may become
a child.” So the Court struck a balance, as it
often does when values and goals compete. It held that the
State could prohibit abortions after fetal viability, so long
as the ban contained exceptions to safeguard a woman’s life
or health. It held that even before viability, the State could
regulate the abortion procedure in multiple and meaningful
ways. But until the viability line was crossed, the Court
held, a State could not impose a “substantial obstacle” on a
woman’s “right to elect the procedure” as she (not the government) thought proper, in light of all the circumstances
and complexities of her own life. Ibid.
Today, the Court discards that balance. It says that from
the very moment of fertilization, a woman has no rights to
speak of. A State can force her to bring a pregnancy to term,
even at the steepest personal and familial costs. An abortion restriction, the majority holds, is permissible whenever
rational, the lowest level of scrutiny known to the law. And
because, as the Court has often stated, protecting fetal life
is rational, States will feel free to enact all manner of restrictions. The Mississippi law at issue here bars abortions
after the 15th week of pregnancy. Under the majority’s ruling, though, another State’s law could do so after ten weeks,
or five or three or one—or, again, from the moment of fertilization. States have already passed such laws, in anticipation of today’s ruling. More will follow. Some States have
enacted laws extending to all forms of abortion procedure,
including taking medication in one’s own home. They have
passed laws without any exceptions for when the woman is
the victim of rape or incest. Under those laws, a woman
will have to bear her rapist’s child or a young girl her father’s—no matter if doing so will destroy her life. So too,
after today’s ruling, some States may compel women to
carry to term a fetus with severe physical anomalies—for
example, one afflicted with Tay-Sachs disease, sure to die
within a few years of birth. States may even argue that a
prohibition on abortion need make no provision for protecting a woman from risk of death or physical harm. Across a
vast array of circumstances, a State will be able to impose
its moral choice on a woman and coerce her to give birth to
a child.
Enforcement of all these draconian restrictions will also
be left largely to the States’ devices. A State can of course
impose criminal penalties on abortion providers, including
lengthy prison sentences. But some States will not stop
there. Perhaps, in the wake of today’s decision, a state law
will criminalize the woman’s conduct too, incarcerating or
fining her for daring to seek or obtain an abortion. And as
Texas has recently shown, a State can turn neighbor
against neighbor, enlisting fellow citizens in the effort to
root out anyone who tries to get an abortion, or to assist
another in doing so.
Today’s decision, the majority says,
permits “each State” to address abortion as it pleases. That is cold comfort, of course, for the poor woman
who cannot get the money to fly to a distant State for a procedure. Above all others, women lacking financial resources will suffer from today’s decision. In any event, interstate restrictions will also soon be in the offing. After
this decision, some States may block women from traveling
out of State to obtain abortions, or even from receiving abortion medications from out of State. Some may criminalize
efforts, including the provision of information or funding, to
help women gain access to other States’ abortion services.
Most threatening of all, no language in today’s decision
stops the Federal Government from prohibiting abortions
nationwide, once again from the moment of conception and
without exceptions for rape or incest. If that happens, “the
views of [an individual State’s] citizens” will not matter. The challenge for a woman will be to finance a
trip not to “New York [or] California” but to Toronto.
Whatever the exact scope of the coming laws, one result
of today’s decision is certain: the curtailment of women’s
rights, and of their status as free and equal citizens. Yesterday, the Constitution guaranteed that a woman confronted with an unplanned pregnancy could (within reasonable limits) make her own decision about whether to bear a
child, with all the life-transforming consequences that act
involves. But no longer. As
of today, this Court holds, a State can always force a woman
to give birth, prohibiting even the earliest abortions. A
State can thus transform what, when freely undertaken, is
a wonder into what, when forced, may be a nightmare.
Some women, especially women of means, will find ways
around the State’s assertion of power. Others—those without money or childcare or the ability to take time off from
work—will not be so fortunate. Maybe they will try an unsafe method of abortion, and come to physical harm, or even
die. Maybe they will undergo pregnancy and have a child,
but at significant personal or familial cost. At the least,
they will incur the cost of losing control of their lives. The
Constitution will, today’s majority holds, provide no shield,
despite its guarantees of liberty and equality for all.
And no one should be confident that this majority is done
with its work. The right Roe and Casey recognized does not
stand alone. To the contrary, the Court has linked it for
decades to other settled freedoms involving bodily integrity,
familial relationships, and procreation. Most obviously, the
right to terminate a pregnancy arose straight out of the
right to purchase and use contraception. The majority (or to be
more accurate, most of it) is eager to tell us today that nothing it does “cast[s] doubt on precedents that do not concern
abortion.”
But how could that be? The lone rationale for
what the majority does today is that the right to elect an
abortion is not “deeply rooted in history”: Not until Roe, the
majority argues, did people think abortion fell within the
Constitution’s guarantee of liberty. The same
could be said, though, of most of the rights the majority
claims it is not tampering with. The majority could write
just as long an opinion showing, for example, that until the
mid-20th century, “there was no support in American law
for a constitutional right to obtain [contraceptives].” So one of two things must be true. Either the majority does not really believe in its own reasoning. Or if it does,
all rights that have no history stretching back to the mid-19th century are insecure. Either the mass of the majority’s
opinion is hypocrisy, or additional constitutional rights are
under threat. It is one or the other.
One piece of evidence on that score seems especially salient: The majority’s cavalier approach to overturning this
Court’s precedents. Stare decisis is the Latin phrase for a
foundation stone of the rule of law: that things decided
should stay decided unless there is a very good reason for
change. It is a doctrine of judicial modesty and humility.
Those qualities are not evident in today’s opinion. The majority has no good reason for the upheaval in law and society
it sets off. Women have relied on the availability of abortion both in structuring their relationships and in planning their lives. The legal framework Roe and Casey developed to balance the competing interests in this sphere has proved workable in courts across
the country. No recent developments, in either law or fact,
have eroded or cast doubt on those precedents. Nothing, in
short, has changed. | Answer the question below using only the provided text, do not use any other outside sources for information. Any mention of a Supreme Court Justice by name should be in bold.
What specific concerns did the dissenting Supreme Court Justices have on this ruling?
JUSTICE BREYER, JUSTICE SOTOMAYOR, and JUSTICE
KAGAN, dissenting.
For half a century, Roe v. Wade, and
Planned Parenthood of Southeastern Pa. v. Casey, have protected the liberty and equality of
women. Roe held, and Casey reaffirmed, that the Constitution safeguards a woman’s right to decide for herself
whether to bear a child. Roe held, and Casey reaffirmed,
that in the first stages of pregnancy, the government could
not make that choice for women. The government could not
control a woman’s body or the course of a woman’s life: It
could not determine what the woman’s future would be. Respecting a
woman as an autonomous being, and granting her full
equality, meant giving her substantial choice over this most
personal and most consequential of all life decisions.
Roe and Casey well understood the difficulty and divisiveness of the abortion issue. The Court knew that Americans
hold profoundly different views about the “moral[ity]” of
“terminating a pregnancy, even in its earliest stage.” And the Court recognized that “the state has legitimate interests from the outset of the pregnancy in protecting” the “life of the fetus that may become
a child.” So the Court struck a balance, as it
often does when values and goals compete. It held that the
State could prohibit abortions after fetal viability, so long
as the ban contained exceptions to safeguard a woman’s life
or health. It held that even before viability, the State could
regulate the abortion procedure in multiple and meaningful
ways. But until the viability line was crossed, the Court
held, a State could not impose a “substantial obstacle” on a
woman’s “right to elect the procedure” as she (not the government) thought proper, in light of all the circumstances
and complexities of her own life. Ibid.
Today, the Court discards that balance. It says that from
the very moment of fertilization, a woman has no rights to
speak of. A State can force her to bring a pregnancy to term,
even at the steepest personal and familial costs. An abortion restriction, the majority holds, is permissible whenever
rational, the lowest level of scrutiny known to the law. And
because, as the Court has often stated, protecting fetal life
is rational, States will feel free to enact all manner of restrictions. The Mississippi law at issue here bars abortions
after the 15th week of pregnancy. Under the majority’s ruling, though, another State’s law could do so after ten weeks,
or five or three or one—or, again, from the moment of fertilization. States have already passed such laws, in anticipation of today’s ruling. More will follow. Some States have
enacted laws extending to all forms of abortion procedure,
including taking medication in one’s own home. They have
passed laws without any exceptions for when the woman is
the victim of rape or incest. Under those laws, a woman
will have to bear her rapist’s child or a young girl her father’s—no matter if doing so will destroy her life. So too,
after today’s ruling, some States may compel women to
carry to term a fetus with severe physical anomalies—for
example, one afflicted with Tay-Sachs disease, sure to die
within a few years of birth. States may even argue that a
prohibition on abortion need make no provision for protecting a woman from risk of death or physical harm. Across a
vast array of circumstances, a State will be able to impose
its moral choice on a woman and coerce her to give birth to
a child.
Enforcement of all these draconian restrictions will also
be left largely to the States’ devices. A State can of course
impose criminal penalties on abortion providers, including
lengthy prison sentences. But some States will not stop
there. Perhaps, in the wake of today’s decision, a state law
will criminalize the woman’s conduct too, incarcerating or
fining her for daring to seek or obtain an abortion. And as
Texas has recently shown, a State can turn neighbor
against neighbor, enlisting fellow citizens in the effort to
root out anyone who tries to get an abortion, or to assist
another in doing so.
Today’s decision, the majority says,
permits “each State” to address abortion as it pleases. That is cold comfort, of course, for the poor woman
who cannot get the money to fly to a distant State for a procedure. Above all others, women lacking financial resources will suffer from today’s decision. In any event, interstate restrictions will also soon be in the offing. After
this decision, some States may block women from traveling
out of State to obtain abortions, or even from receiving abortion medications from out of State. Some may criminalize
efforts, including the provision of information or funding, to
help women gain access to other States’ abortion services.
Most threatening of all, no language in today’s decision
stops the Federal Government from prohibiting abortions
nationwide, once again from the moment of conception and
without exceptions for rape or incest. If that happens, “the
views of [an individual State’s] citizens” will not matter. The challenge for a woman will be to finance a
trip not to “New York [or] California” but to Toronto.
Whatever the exact scope of the coming laws, one result
of today’s decision is certain: the curtailment of women’s
rights, and of their status as free and equal citizens. Yesterday, the Constitution guaranteed that a woman confronted with an unplanned pregnancy could (within reasonable limits) make her own decision about whether to bear a
child, with all the life-transforming consequences that act
involves. But no longer. As
of today, this Court holds, a State can always force a woman
to give birth, prohibiting even the earliest abortions. A
State can thus transform what, when freely undertaken, is
a wonder into what, when forced, may be a nightmare.
Some women, especially women of means, will find ways
around the State’s assertion of power. Others—those without money or childcare or the ability to take time off from
work—will not be so fortunate. Maybe they will try an unsafe method of abortion, and come to physical harm, or even
die. Maybe they will undergo pregnancy and have a child,
but at significant personal or familial cost. At the least,
they will incur the cost of losing control of their lives. The
Constitution will, today’s majority holds, provide no shield,
despite its guarantees of liberty and equality for all.
And no one should be confident that this majority is done
with its work. The right Roe and Casey recognized does not
stand alone. To the contrary, the Court has linked it for
decades to other settled freedoms involving bodily integrity,
familial relationships, and procreation. Most obviously, the
right to terminate a pregnancy arose straight out of the
right to purchase and use contraception. The majority (or to be
more accurate, most of it) is eager to tell us today that nothing it does “cast[s] doubt on precedents that do not concern
abortion.”
But how could that be? The lone rationale for
what the majority does today is that the right to elect an
abortion is not “deeply rooted in history”: Not until Roe, the
majority argues, did people think abortion fell within the
Constitution’s guarantee of liberty. The same
could be said, though, of most of the rights the majority
claims it is not tampering with. The majority could write
just as long an opinion showing, for example, that until the
mid-20th century, “there was no support in American law
for a constitutional right to obtain [contraceptives].” So one of two things must be true. Either the majority does not really believe in its own reasoning. Or if it does,
all rights that have no history stretching back to the mid-19th century are insecure. Either the mass of the majority’s
opinion is hypocrisy, or additional constitutional rights are
under threat. It is one or the other.
One piece of evidence on that score seems especially salient: The majority’s cavalier approach to overturning this
Court’s precedents. Stare decisis is the Latin phrase for a
foundation stone of the rule of law: that things decided
should stay decided unless there is a very good reason for
change. It is a doctrine of judicial modesty and humility.
Those qualities are not evident in today’s opinion. The majority has no good reason for the upheaval in law and society
it sets off. Women have relied on the availability of abortion both in structuring their relationships and in planning their lives. The legal framework Roe and Casey developed to balance the competing interests in this sphere has proved workable in courts across
the country. No recent developments, in either law or fact,
have eroded or cast doubt on those precedents. Nothing, in
short, has changed.
|
Craft your answer only using the information provided in the context block. Keep your answer under 200 words. | How many complaints were within OIG jurisdiction? | Section 1001 of the USA PATRIOT Act (Patriot Act), Public Law 107-56, directs the Office of the Inspector
General (OIG) of the U.S. Department of Justice (DOJ or Department) to undertake a series of actions related
to claims of civil rights or civil liberties violations allegedly committed by DOJ employees. It also requires the
OIG to provide semiannual reports to Congress on the implementation of the OIG’s responsibilities under
Section 1001. This report summarizes the OIG’s Section 1001-related activities from July 1, 2023, through
December 31, 2023.
Introduction
The OIG is an independent entity within DOJ that reports to both the Attorney General and Congress. The
OIG’s mission is to investigate allegations of waste, fraud, and abuse in DOJ programs and personnel, and to
promote economy and efficiency in DOJ operations.
The OIG has jurisdiction to review programs and personnel in all DOJ components, including the Federal
Bureau of Investigation (FBI), the Drug Enforcement Administration (DEA), the Federal Bureau of Prisons
(BOP), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), the U.S. Marshals Service (USMS), and
the U.S. Attorneys’ Offices.
The OIG consists of the Immediate Office of the Inspector General and the following divisions and offices:
• Audit Division conducts independent audits of Department programs, computer systems, financial
statements, and DOJ-awarded grants and contracts.
• Evaluation and Inspections Division conducts program and management reviews that involve on-site
inspections, statistical analysis, and other techniques to review Department programs and activities.
• Investigations Division investigates allegations of bribery, fraud, abuse, civil rights violations, and
violations of other criminal laws and administrative procedures that govern Department employees,
contractors, and grantees.
• Oversight and Review Division blends the skills of attorneys, investigators, and program analysts to
investigate or review high profile or sensitive matters involving Department programs or employees.
• Information Technology Division executes the OIG’s IT strategic vision and goals by directing
technology and business process integration, network administration, implementation of computer
hardware and software, cybersecurity, applications development, programming services, policy
formulation, and other mission-support activities.
• Management and Planning Division provides the Inspector General with advice on administrative
and fiscal policy and assists OIG components by providing services in the areas of planning, budget,
finance, quality assurance, personnel, communications, procurement, facilities, telecommunications,
security, and general support.
• Office of General Counsel provides legal advice to OIG management and staff. In addition, the office
drafts memoranda on issues of law; prepares administrative subpoenas; represents the OIG in
personnel, contractual, and legal matters; and responds to Freedom of Information Act requests.
The OIG has a staff of approximately 500 employees, about half of whom are based in Washington, D.C.
The OIG has 28 Investigations Division field locations and 6 Audit Division regional offices located
throughout the country.
Section 1001 of the Patriot Act
Section 1001 of the Patriot Act provides the following:
The DOJ Inspector General shall designate one official who shall―
(1) review information and receive complaints alleging abuses of civil rights and civil liberties by DOJ
employees and officials;
(2) make public through the Internet, radio, television, and newspaper advertisements information on
the responsibilities and functions of, and how to contact, the official; and
(3) submit to the Committee on the Judiciary of the House of Representatives and the Committee on
the Judiciary of the Senate on a semiannual basis a report on the implementation of this subsection
and detailing any abuses described in paragraph (1), including a description of the use of funds
appropriations used to carry out this subsection.
Responsibilities, Functions, and Contact Information of the OIG’s Designated Section 1001
Official
The DOJ Inspector General has designated the OIG’s Assistant Inspector General for Investigations as the
official responsible for the duties required under Section 1001, which are described in the next section of
this report.
Civil Rights and Civil Liberties Complaints
Section 1001 requires the OIG to “review information and receive complaints alleging abuses of civil rights
and civil liberties by employees and officials of the Department of Justice.” While the phrase “civil rights and
civil liberties” is not specifically defined in the Patriot Act, the OIG has looked to the “Sense of Congress”
provisions in the statute, namely Sections 102 and 1002, for context. Sections 102 and 1002 identify certain
ethnic and religious groups who would be vulnerable to abuse due to a possible backlash from the terrorist
attacks of September 11, 2001, including Muslims, Arabs, Sikhs, and South Asians.
The OIG’s Investigations Division, which is headed by the Assistant Inspector General for Investigations,
manages the OIG’s Section 1001 investigative responsibilities. The two units with primary responsibility for
coordinating these activities are Operations Branch I and Operations Branch II, each of which is directed by
a Special Agent in Charge and two Assistant Special Agents in Charge. In addition, these units are
supported by Investigative Specialists and other staff assigned to the Hotline Operations Branch, who divide
their time between Section 1001 and other responsibilities.
The Investigations Division receives civil rights and civil liberties complaints via mail, email, telephone, and
fax. Upon receipt, Division Assistant Special Agents in Charge review the complaints and assign an initial
disposition to each matter, and Investigative Specialists enter the complaints alleging a violation within the
investigative jurisdiction of the OIG or another federal agency into an OIG database. Serious civil rights and
civil liberties allegations relating to actions of DOJ employees or contractors are typically assigned to an OIG
Investigations Division field office, where Special Agents conduct investigations of criminal violations and
administrative misconduct.
Given the number of complaints the OIG receives compared to its limited resources, the OIG does not
investigate all allegations of misconduct against DOJ employees. The OIG refers many complaints involving
DOJ employees to internal affairs offices in DOJ components such as the FBI Inspection Division, the DEA
Office of Professional Responsibility, and the BOP Office of Internal Affairs. In certain referrals, the OIG
requires the components to report the results of their investigations to the OIG. In most cases, the OIG
notifies the complainant of the referral.
Many complaints the OIG receives involve matters outside its jurisdiction. When those matters identify a
serious issue for investigation, such as a threat to life or safety, the OIG forwards them to the appropriate
investigative entity. In other cases, the complainant is directed to another investigative agency when
possible. Allegations related to the authority of a DOJ attorney to litigate, investigate, or provide legal advice are referred to the DOJ Office of Professional Responsibility. Allegations related solely to state and local law enforcement or government officials that raise a federal civil rights concern are forwarded to the DOJ Civil Rights Division.
When an allegation received from any source involves a potential violation of federal civil rights statutes by a
DOJ employee, the OIG discusses the complaint with the DOJ Civil Rights Division for possible prosecution.
In some cases, the Civil Rights Division accepts the case and requests additional investigation by either the
OIG or the FBI. In other cases, the Civil Rights Division declines prosecution and either the OIG or the
appropriate DOJ internal affairs office reviews the case for possible administrative misconduct.
Complaints Processed During This Reporting Period
Between July 1, 2023, and December 31, 2023, the period covered by this report, the OIG processed 739 new complaints that were identified by the complainant as civil rights or civil liberties complaints.
Of these complaints, 717 did not fall within the OIG’s jurisdiction or did not warrant further investigation.
These complaints involved allegations against agencies or entities outside the DOJ, including other federal
agencies, local governments, or private businesses, as well as allegations that were not suitable for
investigation by the OIG, and could not be or were not referred to another agency for investigation,
generally because the complaints failed to identify a subject or agency.
The OIG found that the remaining 22 of the 739 complaints it received involved DOJ employees or DOJ
components and included allegations that required further review. The OIG determined that 20 of these
complaints generally raised management issues unrelated to the OIG’s Section 1001 duties and referred
these complaints to DOJ components for appropriate handling. Examples of complaints in this category
included allegations by federal prisoners about the general prison conditions, and by others that the FBI did not initiate an investigation into particular allegations.
The OIG identified two complaints by federal prisoners that alleged a potential civil rights or civil liberties
violation under Section 1001 but, based on the lack of an identified subject or the non-specific nature of the allegations, determined that these complaints generally raised management issues. The OIG referred these complaints to BOP for appropriate handling.
| system instructions: [Craft your answer only using the information provided in the context block. Keep your answer under 200 words.]
question: [
How many complaints were within OIG jurisdiction?
]
context block: [Section 1001 of the USA PATRIOT Act (Patriot Act), Public Law 107-56, directs the Office of the Inspector
General (OIG) of the U.S. Department of Justice (DOJ or Department) to undertake a series of actions related
to claims of civil rights or civil liberties violations allegedly committed by DOJ employees. It also requires the
OIG to provide semiannual reports to Congress on the implementation of the OIG’s responsibilities under
Section 1001. This report summarizes the OIG’s Section 1001-related activities from July 1, 2023, through
December 31, 2023.
Introduction
The OIG is an independent entity within DOJ that reports to both the Attorney General and Congress. The
OIG’s mission is to investigate allegations of waste, fraud, and abuse in DOJ programs and personnel, and to
promote economy and efficiency in DOJ operations.
The OIG has jurisdiction to review programs and personnel in all DOJ components, including the Federal
Bureau of Investigation (FBI), the Drug Enforcement Administration (DEA), the Federal Bureau of Prisons
(BOP), the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), the U.S. Marshals Service (USMS), and
the U.S. Attorneys’ Offices.
The OIG consists of the Immediate Office of the Inspector General and the following divisions and offices:
• Audit Division conducts independent audits of Department programs, computer systems, financial
statements, and DOJ-awarded grants and contracts.
• Evaluation and Inspections Division conducts program and management reviews that involve on-site
inspections, statistical analysis, and other techniques to review Department programs and activities.
• Investigations Division investigates allegations of bribery, fraud, abuse, civil rights violations, and
violations of other criminal laws and administrative procedures that govern Department employees,
contractors, and grantees.
• Oversight and Review Division blends the skills of attorneys, investigators, and program analysts to
investigate or review high profile or sensitive matters involving Department programs or employees.
• Information Technology Division executes the OIG’s IT strategic vision and goals by directing
technology and business process integration, network administration, implementation of computer
hardware and software, cybersecurity, applications development, programming services, policy
formulation, and other mission-support activities.
• Management and Planning Division provides the Inspector General with advice on administrative
and fiscal policy and assists OIG components by providing services in the areas of planning, budget,
finance, quality assurance, personnel, communications, procurement, facilities, telecommunications,
security, and general support.
• Office of General Counsel provides legal advice to OIG management and staff. In addition, the office
drafts memoranda on issues of law; prepares administrative subpoenas; represents the OIG in
personnel, contractual, and legal matters; and responds to Freedom of Information Act requests.
The OIG has a staff of approximately 500 employees, about half of whom are based in Washington, D.C.
The OIG has 28 Investigations Division field locations and 6 Audit Division regional offices located
throughout the country.
Section 1001 of the Patriot Act
Section 1001 of the Patriot Act provides the following:
The DOJ Inspector General shall designate one official who shall―
(1) review information and receive complaints alleging abuses of civil rights and civil liberties by DOJ
employees and officials;
(2) make public through the Internet, radio, television, and newspaper advertisements information on
the responsibilities and functions of, and how to contact, the official; and
(3) submit to the Committee on the Judiciary of the House of Representatives and the Committee on
the Judiciary of the Senate on a semiannual basis a report on the implementation of this subsection
and detailing any abuses described in paragraph (1), including a description of the use of funds
appropriations used to carry out this subsection.
Responsibilities, Functions, and Contact Information of the OIG’s Designated Section 1001
Official
The DOJ Inspector General has designated the OIG’s Assistant Inspector General for Investigations as the
official responsible for the duties required under Section 1001, which are described in the next section of
this report.
Civil Rights and Civil Liberties Complaints
Section 1001 requires the OIG to “review information and receive complaints alleging abuses of civil rights
and civil liberties by employees and officials of the Department of Justice.” While the phrase “civil rights and
civil liberties” is not specifically defined in the Patriot Act, the OIG has looked to the “Sense of Congress”
provisions in the statute, namely Sections 102 and 1002, for context. Sections 102 and 1002 identify certain
ethnic and religious groups who would be vulnerable to abuse due to a possible backlash from the terrorist
attacks of September 11, 2001, including Muslims, Arabs, Sikhs, and South Asians.
The OIG’s Investigations Division, which is headed by the Assistant Inspector General for Investigations,
manages the OIG’s Section 1001 investigative responsibilities. The two units with primary responsibility for
coordinating these activities are Operations Branch I and Operations Branch II, each of which is directed by
a Special Agent in Charge and two Assistant Special Agents in Charge. In addition, these units are
supported by Investigative Specialists and other staff assigned to the Hotline Operations Branch, who divide
their time between Section 1001 and other responsibilities.
The Investigations Division receives civil rights and civil liberties complaints via mail, email, telephone, and
fax. Upon receipt, Division Assistant Special Agents in Charge review the complaints and assign an initial
disposition to each matter, and Investigative Specialists enter the complaints alleging a violation within the
investigative jurisdiction of the OIG or another federal agency into an OIG database. Serious civil rights and
civil liberties allegations relating to actions of DOJ employees or contractors are typically assigned to an OIG
Investigations Division field office, where Special Agents conduct investigations of criminal violations and
administrative misconduct.
Given the number of complaints the OIG receives compared to its limited resources, the OIG does not
investigate all allegations of misconduct against DOJ employees. The OIG refers many complaints involving
DOJ employees to internal affairs offices in DOJ components such as the FBI Inspection Division, the DEA
Office of Professional Responsibility, and the BOP Office of Internal Affairs. In certain referrals, the OIG
requires the components to report the results of their investigations to the OIG. In most cases, the OIG
notifies the complainant of the referral.
Many complaints the OIG receives involve matters outside its jurisdiction. When those matters identify a
serious issue for investigation, such as a threat to life or safety, the OIG forwards them to the appropriate
investigative entity. In other cases, the complainant is directed to another investigative agency when
possible. Allegations related to the authority of a DOJ attorney to litigate, investigate, or provide legal advice are referred to the DOJ Office of Professional Responsibility. Allegations related solely to state and local law enforcement or government officials that raise a federal civil rights concern are forwarded to the DOJ Civil Rights Division.
When an allegation received from any source involves a potential violation of federal civil rights statutes by a
DOJ employee, the OIG discusses the complaint with the DOJ Civil Rights Division for possible prosecution.
In some cases, the Civil Rights Division accepts the case and requests additional investigation by either the
OIG or the FBI. In other cases, the Civil Rights Division declines prosecution and either the OIG or the
appropriate DOJ internal affairs office reviews the case for possible administrative misconduct.
Complaints Processed During This Reporting Period
Between July 1, 2023, and December 31, 2023, the period covered by this report, the OIG processed 739 new complaints that were identified by the complainant as civil rights or civil liberties complaints.
Of these complaints, 717 did not fall within the OIG’s jurisdiction or did not warrant further investigation.
These complaints involved allegations against agencies or entities outside the DOJ, including other federal
agencies, local governments, or private businesses, as well as allegations that were not suitable for
investigation by the OIG, and could not be or were not referred to another agency for investigation,
generally because the complaints failed to identify a subject or agency.
The OIG found that the remaining 22 of the 739 complaints it received involved DOJ employees or DOJ
components and included allegations that required further review. The OIG determined that 20 of these
complaints generally raised management issues unrelated to the OIG’s Section 1001 duties and referred
these complaints to DOJ components for appropriate handling. Examples of complaints in this category
included allegations by federal prisoners about the general prison conditions, and by others that the FBI did not initiate an investigation into particular allegations.
The OIG identified two complaints by federal prisoners that alleged a potential civil rights or civil liberties
violation under Section 1001 but, based on the lack of an identified subject or the non-specific nature of the allegations, determined that these complaints generally raised management issues. The OIG referred these complaints to BOP for appropriate handling.]
|
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | I think Bitcoin is very interesting and want to buy some cryptocurrency but I have no idea of the best time. I live in CA and get paid at the first of the month. What is my best strategy? | What is the Best Time to Buy Cryptocurrency?
Cryptocurrency is a volatile market that can offer opportunities to make - or lose - significant sums of money. It’s traded 24 hours a day, 7 days a week by investors located all around the world.
What is the Best Time to Buy Cryptocurrency?
There are many methods that equity investors use to decide when to execute a trade in the stock markets, but the same rules and trading patterns don’t always apply to cryptocurrency. It’s true that cryptocurrency buyers can make purchases within certain windows to get the best possible price. Still, the volatility of the cryptocurrency market makes it very difficult to identify reliable patterns and choose positions accordingly.
Unlike other assets, trading cryptocurrency has very low barriers to entry, with tokens available at a wide range of prices. Rising inflation has also caused many to invest as a way to increase ancillary income. The allure of potentially turning a small investment into millions has also led others to try their luck with digital assets. Lastly, the constant hype around cryptocurrencies has caused even some crypto skeptics to look more closely out of FOMO (the Fear Of Missing Out).
Buying cryptocurrency requires individuals to use a crypto wallet that can interact with the blockchain that tracks cryptocurrencies. The easiest way to do this is through an online cryptocurrency exchange platform. There are many to choose from, but exchange fees can vary widely. Make sure to take all fees into account before you buy cryptocurrency.
Additionally, the cost of recording your transaction on the blockchain’s distributed ledger can also vary with demand on computing power, energy, or transaction volume, which can impact your bottom line.
However, with the volatility in trading cryptocurrency, those who want to start investing in cryptocurrency often wonder when is the best time to buy cryptocurrency?
Key Highlights
Many investors, some less experienced than others, are buying cryptocurrencies due to the hype, “fear-of-missing-out,” and low barrier to entry.
Choosing the right positions can make or break an investment strategy, and the volatility of cryptocurrency makes it difficult to identify patterns and investment triggers.
There are certain times that are better for trading cryptocurrency than others, but ultimately the best time to buy crypto is when the buyer is feeling confident in their strategy and financially ready to make a move.
Best Time of the Day to Buy Cryptocurrency
One of the perks of trading cryptocurrency is that you can buy it whenever you want. But many investors buy and sell cryptocurrencies during the same hours that the New York Stock Exchange (“NYSE”) is open. But since you can buy and sell crypto at all hours of the day, you’ll need to know which hours are better for buying cryptocurrency.
Through analyzing months of data, you’ll begin to notice daily trends. Paying attention to cryptocurrencies with higher market capitalizations like Bitcoin, Ether, and Solana can also help newer investors determine better times of day to trade since cryptocurrency prices tend to rise and fall together.
Experts say the best time of day to buy cryptocurrency is early in the morning before the NYSE opens since values tend to rise as the day goes on. Be sure to pay attention to slight daily fluctuations across different cryptocurrencies since trends will vary from coin to coin.
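As a rough illustration of the kind of analysis the article is describing, the short Python sketch below (not part of the original article) groups a price series by hour of day and by weekday, the two patterns the article discusses. The price series here is synthetic random-walk data, so the printed result is purely illustrative; with real exchange history loaded into the same two columns, the same grouping would apply.

```python
import numpy as np
import pandas as pd

# Synthetic hourly price series standing in for roughly three months of exchange data.
rng = np.random.default_rng(0)
stamps = pd.date_range("2024-01-01", periods=24 * 90, freq="h")
prices = 30000 + np.cumsum(rng.normal(0, 50, size=len(stamps)))
df = pd.DataFrame({"timestamp": stamps, "price": prices})

# Average price by hour of day and by weekday (0 = Monday).
by_hour = df.groupby(df["timestamp"].dt.hour)["price"].mean()
by_weekday = df.groupby(df["timestamp"].dt.dayofweek)["price"].mean()

print("cheapest hour of day in this sample:", by_hour.idxmin())
print("cheapest weekday in this sample (0 = Monday):", by_weekday.idxmin())
```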
Best Time of the Week to Buy Cryptocurrency
Now that you’re getting used to setting your alarm bright and early to watch cryptocurrency trends, you may begin to notice longer patterns from week to week.
Prices are lower when the market is less busy. Although you can trade cryptocurrencies at any time of day, the market is more active during typical work hours and less active early in the morning, at night, and on the weekends.
Generally, cryptocurrency prices start low on Monday and rise throughout the week. When the weekend hits, prices tend to drop until market activity begins the following Monday. Since prices are likely to be at their lowest point following a weekend of low trading activity, Monday is the best time of the week to buy cryptocurrency.
Best Time of the Month to Buy Cryptocurrency
Pricing trends carry on as weeks turn into months, and new trading patterns emerge that raise and lower the price of various cryptocurrencies over time. Since crypto trends are constantly in flux, deciding the best time of the month to buy cryptocurrency will require patience as you get to know the pricing trends of your favorite coins.
For now, the best time to buy cryptocurrency is toward the end of the month. Cryptocurrency prices tend to rise in the first weeks of the month before they collapse and continue to trend downward through the end of the month.
It’s worth reiterating again that cryptocurrencies are notorious for their volatility, which means patterns and trends that are true one month can vary widely the next. It takes time and diligence to learn how to follow cryptocurrency values and market fluctuations.
How to Time the Cryptocurrency Market
Here’s a quick recap to help you learn how to time the cryptocurrency market and get the best possible prices:
Cryptocurrencies are most active during the work week, with prices starting low on Monday morning and steadily rising until they drop over the weekend.
Pay attention to stock market trading hours as they have an effect on cryptocurrency trading, even though you can buy and sell cryptocurrencies 24/7.
Be aware of your risk tolerance by forecasting your cash flow and watching cryptocurrency market trends.
The Best Time to Buy Cryptocurrency
It can be difficult to time the cryptocurrency market due to its volatile nature, but there are times that are better for buying cryptocurrencies than others.
Just as with any other investment, cryptocurrency buyers should be aware of their risk tolerance and market conditions. But some trading strategies that work well on the stock exchange may not translate into profits for cryptocurrency trades.
The best time to buy cryptocurrency is whenever you’re ready to start investing. Don’t put more into your investment than you are willing to lose, and keep in mind the rule of dollar-cost averaging. Once you’ve decided on a position, use this guide to decide when the best time to enter the cryptocurrency market is for you. | [question]
I think Bitcoin is very interesting and want to buy some cryptocurrency but I have no idea of the best time. I live in CA and get paid at the first of the month. What is my best strategy?
=====================
[text]
What is the Best Time to Buy Cryptocurrency?
Cryptocurrency is traded 24 hours a day, 7 days a week by investors located all around the world. It is a volatile market that can offer opportunities to make - or lose - significant sums of money.
What is the Best Time to Buy Cryptocurrency?
There are many methods that equity investors use to decide when to execute a trade in the stock markets, but the same rules and trading patterns don’t always apply to cryptocurrency. It’s true that cryptocurrency buyers can make purchases within certain windows to get the best possible price. Still, the volatility of the cryptocurrency market makes it very difficult to identify reliable patterns and choose positions accordingly.
Unlike other assets, cryptocurrency trading has very low barriers to entry, with tokens available across a wide range of values. Rising inflation has also caused many to invest as a way to earn ancillary income. The allure of potentially turning a small investment into millions has led others to try their luck with digital assets. Lastly, the constant hype around cryptocurrencies has caused even some crypto skeptics to look more closely out of FOMO (the Fear Of Missing Out).
Buying cryptocurrency requires individuals to use a crypto wallet that can interact with the blockchain that tracks cryptocurrencies. The easiest way to do this is through an online cryptocurrency exchange platform. There are many to choose from, but exchange fees can vary widely. Make sure to take all fees into account before you buy cryptocurrency.
Additionally, the transaction costs to record your transaction to the distributed ledger that is the blockchain can also vary due to the demand on computing power, energy, or volume of transactions that can impact your bottom line.
However, with the volatility in trading cryptocurrency, those who want to start investing in cryptocurrency often wonder when is the best time to buy cryptocurrency?
Key Highlights
Many investors, some less experienced than others, are buying cryptocurrencies due to the hype, “fear-of-missing-out,” and low barrier to entry.
Choosing the right positions can make or break an investment strategy, and the volatility of cryptocurrency makes it difficult to identify patterns and investment triggers.
There are certain times that are better for trading cryptocurrency than others, but ultimately the best time to buy crypto is when the buyer is feeling confident in their strategy and financially ready to make a move.
Best Time of the Day to Buy Cryptocurrency
One of the perks of trading cryptocurrency is that you can buy it whenever you want. But many investors buy and sell cryptocurrencies during the same hours that the New York Stock Exchange (“NYSE”) is open. But since you can buy and sell crypto at all hours of the day, you’ll need to know which hours are better for buying cryptocurrency.
Through analyzing months of data, you’ll begin to notice daily trends. Paying attention to cryptocurrencies with higher market capitalizations like Bitcoin, Ether, and Solana can also help newer investors determine better times of day to trade since cryptocurrency prices tend to rise and fall together.
Experts say the best time of day to buy cryptocurrency is early in the morning before the NYSE opens since values tend to rise as the day goes on. Be sure to pay attention to slight daily fluctuations across different cryptocurrencies since trends will vary from coin to coin.
Best Time of the Week to Buy Cryptocurrency
Now that you’re getting used to setting your alarm bright and early to watch cryptocurrency trends, you may begin to notice longer patterns from week to week.
Prices are lower when the market is less busy. Although you can trade cryptocurrencies at any time of day, the market is more active during typical work hours and less active early in the morning, at night, and on the weekends.
Generally, cryptocurrency prices start low on Monday and rise throughout the week. When the weekend hits, prices tend to drop until market activity begins the following Monday. Since prices are likely to be at their lowest point following a weekend of low trading activity, Monday is the best time of the week to buy cryptocurrency.
Best Time of the Month to Buy Cryptocurrency
Pricing trends carry on as weeks turn into months, and new trading patterns emerge that raise and lower the price of various cryptocurrencies over time. Since crypto trends are constantly in flux, deciding the best time of the month to buy cryptocurrency will require patience as you get to know the pricing trends of your favorite coins.
For now, the best time to buy cryptocurrency is toward the end of the month. Cryptocurrency prices tend to rise in the first weeks of the month before they collapse and continue to trend downward through the end of the month.
It’s worth reiterating again that cryptocurrencies are notorious for their volatility, which means patterns and trends that are true one month can vary widely the next. It takes time and diligence to learn how to follow cryptocurrency values and market fluctuations.
How to Time the Cryptocurrency Market
Here’s a quick recap to help you learn how to time the cryptocurrency market and get the best possible prices:
Cryptocurrencies are most active during the work week, with prices starting low on Monday morning and steadily rising until they drop over the weekend.
Pay attention to stock market trading hours as they have an effect on cryptocurrency trading, even though you can buy and sell cryptocurrencies 24/7.
Be aware of your risk tolerance by forecasting your cash flow and watching cryptocurrency market trends.
The Best Time to Buy Cryptocurrency
It can be difficult to time the cryptocurrency market due to its volatile nature, but there are times that are better for buying cryptocurrencies than others.
Just as with any other investment, cryptocurrency buyers should be aware of their risk tolerance and market conditions. But some trading strategies that work well on the stock exchange may not translate into profits for cryptocurrency trades.
The best time to buy cryptocurrency is whenever you’re ready to start investing. Don’t put more into your investment than you are willing to lose, and keep in mind the rule of dollar-cost averaging. Once you’ve decided on a position, use this guide to decide when the best time to enter the cryptocurrency market is for you.
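The dollar-cost-averaging rule mentioned above is easy to see with a small worked example. The Python sketch below is not from the article; the $100 budget and the payday prices are made up. It buys a fixed amount on each payday regardless of price and reports the resulting average cost per coin.

```python
# Hypothetical prices on four consecutive paydays; the $100 budget is an assumption.
monthly_budget = 100.0
payday_prices = [42_000, 38_500, 45_200, 40_100]

coins_bought = sum(monthly_budget / price for price in payday_prices)
average_cost = (monthly_budget * len(payday_prices)) / coins_bought

print(f"total invested: ${monthly_budget * len(payday_prices):,.2f}")
print(f"coins accumulated: {coins_bought:.6f}")
print(f"average cost per coin: ${average_cost:,.2f}")
```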
https://corporatefinanceinstitute.com/resources/cryptocurrency/best-time-to-buy-cryptocurrency/#:~:text=Prices%20are%20lower%20when%20the,and%20rise%20throughout%20the%20week.
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | I am bored and don't have time to read this. Please lay this text out to me in layman's terms. Write it from a first-person perspective of Mr Comey talking about what he thinks about the Russia investigation. And sum it up in 6-8 good bullet points. | TESTIMONY OF JAMES COMEY, FORMER DIRECTOR, FEDERAL
BUREAU OF INVESTIGATION
Chairman BURR. Director Comey, you’re now under oath.
And I would just note to members, you will be recognized by seniority for a period up to seven minutes. And again, it is the intent
to move to a closed session no later than 1:00 p.m.
With that, Director Comey, you are recognized. You have the
floor for as long as you might need.
Director COMEY. Thank you. Mr. Chairman, Ranking Member
Warner, members of the committee: Thank you for inviting me
here to testify today. I’ve submitted my statement for the record
and I’m not going to repeat it here this morning. I thought I would
just offer some very brief introductory remarks and then I would
welcome your questions.
When I was appointed FBI Director in 2013, I understood that
I served at the pleasure of the President. Even though I was appointed to a 10-year term, which Congress created in order to underscore the importance of the FBI being outside of politics and
independent, I understood that I could be fired by a President for
any reason or for no reason at all.
And on May the 9th, when I learned that I had been fired, for
that reason I immediately came home as a private citizen. But then
the explanations, the shifting explanations, confused me and increasingly concerned me. They confused me because the President
and I had had multiple conversations about my job, both before and
after he took office, and he had repeatedly told me I was doing a
great job and he hoped I would stay. And I had repeatedly assured
him that I did intend to stay and serve out the remaining six years
of my term.
He told me repeatedly that he had talked to lots of people about
me, including our current Attorney General, and had learned that I was doing a great job and that I was extremely well-liked by the
FBI workforce.
So it confused me when I saw on television the President saying
that he actually fired me because of the Russia investigation and
learned, again from the media, that he was telling privately other
parties that my firing had relieved great pressure on the Russia investigation.
I was also confused by the initial explanation that was offered
publicly, that I was fired because of the decisions I had made during the election year. That didn’t make sense to me for a whole
bunch of reasons, including the time and all the water that had
gone under the bridge since those hard decisions that had to be
made. That didn’t make any sense to me.
And although the law required no reason at all to fire an FBI
Director, the Administration then chose to defame me and, more
importantly, the FBI by saying that the organization was in disarray, that it was poorly led, that the workforce had lost confidence
in its leader.
Those were lies, plain and simple, and I am so sorry that the FBI
workforce had to hear them and I’m so sorry that the American
people were told them. I worked every day at the FBI to help make
that great organization better. And I say ‘‘help’’ because I did nothing alone at the FBI. There are no indispensable people at the FBI.
The organization’s great strength is that its values and abilities
run deep and wide. The FBI will be fine without me. The FBI’s
mission will be relentlessly pursued by its people and that mission
is to protect the American people and uphold the Constitution of
the United States.
I will deeply miss being part of that mission, but this organization and its mission will go on long beyond me and long beyond any
particular administration.
I have a message before I close for my former colleagues at the
FBI. But first I want the American people to know this truth: The
FBI is honest. The FBI is strong. And the FBI is and always will
be independent.
And now to my former colleagues, if I may. I am so sorry that
I didn’t get the chance to say goodbye to you properly. It was the
honor of my life to serve beside you, to be part of the FBI family.
And I will miss it for the rest of my life. Thank you for standing
watch. Thank you for doing so much good for this country. Do that
good as long as ever you can.
And, Senators, I look forward to your questions.
Chairman BURR. Director, thank you for that testimony, both
oral and the written testimony that you provided to the committee
yesterday and made public to the American people.
The Chair would recognize himself first for 12 minutes, Vice
Chair for 12 minutes, based upon the agreement we have.
Director, did the Special Counsel’s Office review and/or edit your
written testimony?
Director COMEY. No.
Chairman BURR. Do you have any doubt that Russia attempted
to interfere in the 2016 elections?
Director COMEY. None. | [question]
I am bored and don't have time to read this. Please lay this text out to me in layman's terms. Write it from a first-person perspective of Mr Comey talking about what he thinks about the Russia investigation. And sum it up in 6-8 good bullet points.
=====================
[text]
TESTIMONY OF JAMES COMEY, FORMER DIRECTOR, FEDERAL
BUREAU OF INVESTIGATION
Chairman BURR. Director Comey, you’re now under oath.
And I would just note to members, you will be recognized by seniority for a period up to seven minutes. And again, it is the intent
to move to a closed session no later than 1:00 p.m.
With that, Director Comey, you are recognized. You have the
floor for as long as you might need.
Director COMEY. Thank you. Mr. Chairman, Ranking Member
Warner, members of the committee: Thank you for inviting me
here to testify today. I’ve submitted my statement for the record
and I’m not going to repeat it here this morning. I thought I would
just offer some very brief introductory remarks and then I would
welcome your questions.
When I was appointed FBI Director in 2013, I understood that
I served at the pleasure of the President. Even though I was appointed to a 10-year term, which Congress created in order to underscore the importance of the FBI being outside of politics and
independent, I understood that I could be fired by a President for
any reason or for no reason at all.
And on May the 9th, when I learned that I had been fired, for
that reason I immediately came home as a private citizen. But then
the explanations, the shifting explanations, confused me and increasingly concerned me. They confused me because the President
and I had had multiple conversations about my job, both before and
after he took office, and he had repeatedly told me I was doing a
great job and he hoped I would stay. And I had repeatedly assured
him that I did intend to stay and serve out the remaining six years
of my term.
He told me repeatedly that he had talked to lots of people about
me, including our current Attorney General, and had learned that I was doing a great job and that I was extremely well-liked by the
FBI workforce.
So it confused me when I saw on television the President saying
that he actually fired me because of the Russia investigation and
learned, again from the media, that he was telling privately other
parties that my firing had relieved great pressure on the Russia investigation.
I was also confused by the initial explanation that was offered
publicly, that I was fired because of the decisions I had made during the election year. That didn’t make sense to me for a whole
bunch of reasons, including the time and all the water that had
gone under the bridge since those hard decisions that had to be
made. That didn’t make any sense to me.
And although the law required no reason at all to fire an FBI
Director, the Administration then chose to defame me and, more
importantly, the FBI by saying that the organization was in disarray, that it was poorly led, that the workforce had lost confidence
in its leader.
Those were lies, plain and simple, and I am so sorry that the FBI
workforce had to hear them and I’m so sorry that the American
people were told them. I worked every day at the FBI to help make
that great organization better. And I say ‘‘help’’ because I did nothing alone at the FBI. There are no indispensable people at the FBI.
The organization’s great strength is that its values and abilities
run deep and wide. The FBI will be fine without me. The FBI’s
mission will be relentlessly pursued by its people and that mission
is to protect the American people and uphold the Constitution of
the United States.
I will deeply miss being part of that mission, but this organization and its mission will go on long beyond me and long beyond any
particular administration.
I have a message before I close for my former colleagues at the
FBI. But first I want the American people to know this truth: The
FBI is honest. The FBI is strong. And the FBI is and always will
be independent.
And now to my former colleagues, if I may. I am so sorry that
I didn’t get the chance to say goodbye to you properly. It was the
honor of my life to serve beside you, to be part of the FBI family.
And I will miss it for the rest of my life. Thank you for standing
watch. Thank you for doing so much good for this country. Do that
good as long as ever you can.
And, Senators, I look forward to your questions.
Chairman BURR. Director, thank you for that testimony, both
oral and the written testimony that you provided to the committee
yesterday and made public to the American people.
The Chair would recognize himself first for 12 minutes, Vice
Chair for 12 minutes, based upon the agreement we have.
Director, did the Special Counsel’s Office review and/or edit your
written testimony?
Director COMEY. No.
Chairman BURR. Do you have any doubt that Russia attempted
to interfere in the 2016 elections?
Director COMEY. None.
https://www.govinfo.gov/content/pkg/CHRG-115shrg25890/pdf/CHRG-115shrg25890.pdf
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Using only information found in the provided context answer the prompt's question using a summary paragraph followed by a bulleted list The bulleted list should be concise but detailed enough that it covers all aspect of the line item. A conclusion paragraph should be included that is no more than 3 sentences and leaves the conversation open for discussion. | Summarize the benefit that enhanced firewall capabilities can have on network security. | Cloud Access Security Broker (CASB)
Given the increasing subscription to multiple clouds in many enterprises, one of the most
important pieces of software is the cloud access security broker (CASB). It sits on the network
between the cloud service customers (CSC) and the cloud service providers (CSP). The evolution
of CASB functionality can be traced as follows [3]:
• The primary function of the first generation of CASBs was the discovery of resources.
They provided visibility into all of the cloud resources that the enterprise users accessed,
thus preventing or minimizing the chances of shadow IT. Shadow IT is the practice of
some users using cloud applications that are not authorized by the enterprise IT
management from home or the office using enterprise desktops. An example of this is the
use of unapproved software as-a-service (SaaS) applications for file sharing, social
media, collaboration, and web conferencing [4]. This generation of CASBs also provides
some statistics, such as software-as-a-service (SaaS) utilization (a toy sketch of this discovery step appears after this list).
• The current generation of CASBs enforces security and governance policies for cloud
applications, thus enabling enterprises to extend their on-premises policies to the cloud.
Specific security services provided by CASBs include:
o Protection of enterprise data that live in cloud service providers’ servers (due to
SaaS or IaaS subscriptions), as well as data inflow and data outflow (i.e., Data
Loss Prevention [DLP] capabilities) from those servers.
o Tracking of threats, such as account hijacking and other malicious activities, some
of which can detect anomalies in users’ cloud access behavior (through robust
User and Entity Behavior Analytics (UEBA) functionality) and stop insider
threats and advanced cyberattacks [5].
o Detection of misconfigurations in the enterprise’s subscribed IaaS and cloud
servers. These misconfigurations pose serious security risks, such as data
breaches. Alerts generated by CASB due to misconfigurations in the enterprise’s
IaaS deployments direct the enterprise to follow guidelines, such as the Center for
Internet Security’s (CIS) benchmarks for public cloud services, thus improving
the overall security profile of the enterprise for cloud access [4].
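As a rough illustration of the first-generation "discovery" capability described above (this sketch is not part of NIST SP 800-215), the Python fragment below counts the cloud hosts seen in a web-proxy log and flags those missing from a sanctioned-services list; the log entries and the list are invented for the example.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sanctioned-service list and proxy log entries.
SANCTIONED = {"sharepoint.com", "salesforce.com", "box.com"}
proxy_log = [
    "https://sharepoint.com/sites/finance/q3-report.xlsx",
    "https://randomfileshare.example/upload",
    "https://box.com/s/abc123",
    "https://randomfileshare.example/upload",
]

hits = Counter(urlparse(url).hostname for url in proxy_log)
shadow_it = {host: n for host, n in hits.items() if host not in SANCTIONED}

for host, n in shadow_it.items():
    print(f"unsanctioned cloud service seen {n} time(s): {host}")
```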
Enhanced Firewall Capabilities
The security functions in firewalls have enlarged alongside the changing network landscape.
Firewalls started as hardware appliances that prevented network packets from a device with a
particular network location (e.g., combination of Internet Protocol (IP) address and port) in one
subnet (e.g., external network or internet) from accessing a device on another network location
or subnet (e.g., intranet or Demilitarized Zone (DMZ) or corporate network). In that setup, it
primarily secured a network perimeter. The evolution of firewall functions can be traced based
on the following feature sets [6]:
• Packet filters and network address translation: Packet filtering and Network address
translation (NAT) are used to monitor and control packets moving across a network
interface, apply predetermined security rules, and obscure the internal network from the
public internet.
• Stateful inspection: Stateful firewalling, also known as dynamic packet filtering, monitors
the state of connections and makes determinations as to what types of data packets belong
to a known active connection and can be allowed to pass through the firewall (a toy sketch of this idea appears after this list).
• Deep packet inspection (DPI): This feature, also known as packet sniffing, examines the
content of packets (both the header and the payload, unlike the stateful inspection that
inspects only the packet header). In addition to the capability provided by stateful
inspection, this has capabilities related to finding hidden threats within the data stream,
such as attempts at data exfiltration, violations of content policies, malware, and more.
• Threat detection and response: Modern firewalls can gather and analyze enough data
across multiple packets and sessions to detect threats and security incidents targeted at a
particular system or a family of systems. These data from multiple firewalls can also be
directed toward security information and event management (SIEM) and correlated with
data from other security tools and IT systems to detect enterprise-wide attacks that span
multiple systems and network layers. In addition, these data can be used to understand
evolving threats and define new access rules, attack patterns, and defensive strategies [6].
• Logging and auditing capabilities: Logging and auditing capabilities result in the
construction of network events that can be used to identify patterns of performance and
security issues.
• Access control functions: Access control functions enforce granular sophisticated access
control policies.
• Multiple locations and functions: Firewalls reside at different locations to perform
different functions. Firewalls at the network edge perform the network perimeter
protection function by filtering disallowed sources and destinations and blocking the
packets of potential threats. Firewalls inside a data center can segment the internal
network to prevent the lateral movement of traffic and isolate sensitive resources (e.g.,
services and data stores). Device-based firewalls prevent malicious traffic in and out of
endpoints.
• Open Application Programming Interfaces (APIs): These enable integration with many
networking products that provide additional security capabilities.
• Policy Composition Capabilities: Some firewalls can have the capabilities to merge
policies at enforcement time so as to ensure that consistent policies are applied to
different classes of users (e.g., those on-premises and on private and public clouds).
• Web application firewalls (WAF): This class of firewalls has been used ever since web
applications accessed through web protocols, such as Hypertext Transfer Protocol
(HTTP), came into existence. A feature advancement in this class of firewalls is
advanced Uniform Resource Locator (URL) filtering. This is the ability to detect traffic
from malicious URLs and prevent web-based threats and attacks by receiving real-time
data analyzed by machine learning algorithms [7][8]. Specifically, this class of firewalls
can inspect threat vectors for SQL Injection, operating system (OS) command injections,
and cross-site scripting attacks, as well as prevent inbound attacks. They are used in
content delivery networks (CDN) and to prevent distributed denial-of-service (DDoS)
attacks. Some additional features found in this class of firewalls are:
a. Ability to specify an allowable list of services (control at the application level)
b. Traffic matches the intent of allowed ports
c. Filtering of some unwanted protocols
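To make the packet-filtering and stateful-inspection bullets above more concrete, here is a toy Python sketch (not part of NIST SP 800-215): a small rule table decides whether a brand-new flow may start, and a connection table lets reply traffic of an already accepted flow pass. Field names, addresses, and rules are illustrative assumptions only.

```python
# Hypothetical rule table: which brand-new flows are allowed to start.
ALLOW_NEW = [
    {"proto": "tcp", "dport": 443},   # outbound HTTPS
    {"proto": "udp", "dport": 53},    # DNS
]
established = set()  # flows already accepted: (src, sport, dst, dport, proto)

def filter_packet(pkt: dict) -> str:
    flow = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])
    reverse = (pkt["dst"], pkt["dport"], pkt["src"], pkt["sport"], pkt["proto"])
    # Stateful part: packets of a known connection (either direction) pass.
    if flow in established or reverse in established:
        return "pass (established connection)"
    # Stateless part: a plain rule match may open a new connection.
    for rule in ALLOW_NEW:
        if pkt["proto"] == rule["proto"] and pkt["dport"] == rule["dport"]:
            established.add(flow)
            return "pass (new connection allowed)"
    return "drop"

print(filter_packet({"src": "10.0.0.5", "sport": 51000, "dst": "203.0.113.7", "dport": 443, "proto": "tcp"}))
print(filter_packet({"src": "203.0.113.7", "sport": 443, "dst": "10.0.0.5", "dport": 51000, "proto": "tcp"}))
print(filter_packet({"src": "10.0.0.5", "sport": 51001, "dst": "203.0.113.7", "dport": 25, "proto": "tcp"}))
```

Running it shows the reply packet passing only because the first packet created the connection-table entry, which is the behavioural difference between plain packet filtering and stateful inspection.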
Appliance-set with Integrated Functions
• Unified threat management (or UTMs): UTM devices combine many of the most critical
security functions – firewall, IPS, VPN concentrator, gateway antivirus, content filtering,
and WAN load balancing – into a single device, usually with a unified management
console.
• Next-generation firewall (NGFW): The distinguishing feature of NGFW is application
data awareness. It can look at data not only at layers 3 and 4 of an Open Systems
Interconnection (OSI) stack but also at layer 7 – the application level. Its capabilities
extend beyond packet filtering and stateful inspection. There are multiple deployment
options available for NGFWs, such as an appliance in the data center, as a software
running in a VM in a cloud, or as a cloud service (FWaaS). Some capabilities of NGFW
include [9]:
a. Deep Packet Inspection (DPI)
b. TLS decryption and inspection of packet payload
c. Intrusion prevention system (IPS) feature
• Web application and API protection (WAAP): This is a comprehensive security approach
and an enhancement over WAF. WAF is an integral component for API security, BOT
(abbreviation for Robot) defense, and DDOS protection.
a. These can be offered as a product suite or as a cloud-based service [10][11].
b. Secure web gateway (SWGs): SWGs are appliances utilized for policy-based
access to and control of cloud-based applications as well as governance of access
to the open web for enterprise users in ubiquitous locations (e.g., headquarters,
branch offices, home, remote locations). An SWG is fundamentally a web filter
that protects outbound user traffic through HTTP or Hypertext Transfer Protocol
Secure (HTTPS) inspection [12]. It also protects user endpoints from web-based
threats that can occur when users click on links to malicious websites or to
websites infected with malware. They centralize control, visibility, and reporting
across many locations and types of users. They are not a replacement for WAFs,
which protect websites housed in enterprise data centers and large headquarter
sites from inbound attacks. | Using only information available from the text below, summarize the benefit that enhanced firewall capabilities can have on network security. The summary should be written in a single paragraph that is followed by a bulleted list that highlights the key points required to answering the question. The bullet points should be concise but descriptive. Include a conclusion that is no more than 3 sentences long and leaves the response open to discussion.
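As a closing illustration of the SWG idea described in the last bullet (again, a hedged sketch rather than NIST text), the fragment below filters outbound URLs against a hypothetical category feed; real gateways rely on vendor-maintained categories and HTTPS inspection rather than a hard-coded dictionary.

```python
from urllib.parse import urlparse

# Hypothetical category data; a real SWG would use a vendor-maintained feed.
BLOCKED_CATEGORIES = {"malware", "phishing"}
HOST_CATEGORIES = {
    "known-bad.example": "malware",
    "login-update.example": "phishing",
    "docs.python.org": "reference",
}

def allow_outbound(url: str) -> bool:
    host = urlparse(url).hostname or ""
    category = HOST_CATEGORIES.get(host, "uncategorized")
    return category not in BLOCKED_CATEGORIES

for url in ("https://docs.python.org/3/", "http://known-bad.example/payload.exe"):
    verdict = "allow" if allow_outbound(url) else "block"
    print(f"{verdict}: {url}")
```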
Cloud Access Security Broker (CASB) Given the increasing subscription to multiple clouds in many enterprises, one of the most important pieces of software is the cloud access security broker (CASB). It sits on the network between the cloud service customers (CSC) and the cloud service providers (CSP). The evolution of CASB functionality can be traced as follows [3]: • The primary function of the first generation of CASBs was the discovery of resources. They provided visibility into all of the cloud resources that the enterprise users accessed, thus preventing or minimizing the chances of shadow IT. Shadow IT is the practice of some users using cloud applications that are not authorized by the enterprise IT management from home or the office using enterprise desktops. An example of this is the use of unapproved software as-a-service (SaaS) applications for file sharing, social media, collaboration, and web conferencing [4]. This generation of CASBs also provides some statistics, such as software-as-a-service (SaaS) utilization. • The current generation of CASBs enforces security and governance policies for cloud applications, thus enabling enterprises to extend their on-premises policies to the cloud. Specific security services provided by CASBs include: o Protection of enterprise data that live in cloud service providers’ servers (due to SaaS or IaaS subscriptions), as well as data inflow and data outflow (i.e., Data Loss Prevention [DLP] capabilities) from those servers. o Tracking of threats, such as account hijacking and other malicious activities, some of which can detect anomalies in users’ cloud access behavior (through robust User and Entity Behavior Analytics (UEBA) functionality) and stop insider threats and advanced cyberattacks [5]. o Detection of misconfigurations in the enterprise’s subscribed IaaS and cloud servers. These misconfigurations pose serious security risks, such as data breaches. Alerts generated by CASB due to misconfigurations in the enterprise’s IaaS deployments direct the enterprise to follow guidelines, such as the Center for Internet Security’s (CIS) benchmarks for public cloud services, thus improving the overall security profile of the enterprise for cloud access [4]. Enhanced Firewall Capabilities The security functions in firewalls have enlarged alongside the changing network landscape. Firewalls started as hardware appliances that prevented network packets from a device with a particular network location (e.g., combination of Internet Protocol (IP) address and port) in one 8 NIST SP 800-215 November 2022 Guide to a Secure Enterprise Network Landscape subnet (e.g., external network or internet) from accessing a device on another network location or subnet (e.g., intranet or Demilitarized Zone (DMZ) or corporate network). In that setup, it primarily secured a network perimeter. The evolution of firewall functions can be traced based on the following feature sets [6]: • Packet filters and network address translation: Packet filtering and Network address translation (NAT) are used to monitor and control packets moving across a network interface, apply predetermined security rules, and obscure the internal network from the public internet. • Stateful inspection: Stateful firewalling, also known as dynamic packet filtering, monitors the state of connections and makes determinations as to what types of data packets belong to a known active connection and can be allowed to pass through the firewall. 
• Deep packet inspection (DPI): This feature, also known as packet sniffing, examines the content of packets (both the header and the payload, unlike the stateful inspection that inspects only the packet header). In addition to the capability provided by stateful inspection, this has capabilities related to finding hidden threats within the data stream, such as attempts at data exfiltration, violations of content policies, malware, and more. • Threat detection and response: Modern firewalls can gather and analyze enough data across multiple packets and sessions to detect threats and security incidents targeted at a particular system or a family of systems. These data from multiple firewalls can also be directed toward security information and event management (SIEM) and correlated with data from other security tools and IT systems to detect enterprise-wide attacks that span multiple systems and network layers. In addition, these data can be used to understand evolving threats and define new access rules, attack patterns, and defensive strategies [6]. • Logging and auditing capabilities: Logging and auditing capabilities result in the construction of network events that can be used to identify patterns of performance and security issues. • Access control functions: Access control functions enforce granular sophisticated access control policies. • Multiple locations and functions: Firewalls reside at different locations to perform different functions. Firewalls at the network edge perform the network perimeter protection function by filtering disallowed sources and destinations and blocking the packets of potential threats. Firewalls inside a data center can segment the internal network to prevent the lateral movement of traffic and isolate sensitive resources (e.g., services and data stores). Device-based firewalls prevent malicious traffic in and out of endpoints. • Open Application Programming Interfaces (APIs): These enable integration with many networking products that provide additional security capabilities. • Policy Composition Capabilities: Some firewalls can have the capabilities to merge policies at enforcement time so as to ensure that consistent policies are applied to different classes of users (e.g., those on-premises and on private and public clouds). • Web application firewalls (WAF): This class of firewalls has been used ever since web applications accessed through web protocols, such as Hypertext Transfer Protocol 9 NIST SP 800-215 November 2022 Guide to a Secure Enterprise Network Landscape (HTTP), came into existence. A feature advancement in this class of firewalls is advanced Uniform Resource Locator (URL) filtering. This is the ability to detect traffic from malicious URLs and prevent web-based threats and attacks by receiving real-time data analyzed by machine learning algorithms [7][8]. Specifically, this class of firewalls can inspect threat vectors for SQL Injection, operating system (OS) command injections, and cross-site scripting attacks, as well as prevent inbound attacks. They are used in content delivery networks (CDN) and to prevent distributed denial-of-service (DDoS) attacks. Some additional features found in this class of firewalls are: a. Ability to specify an allowable list of services (control at the application level) b. Traffic matches the intent of allowed ports c. 
Filtering of some unwanted protocols Appliance-set with Integrated Functions • Unified threat management (or UTMs): UTM devices combine many of the most critical security functions – firewall, IPS, VPN concentrator, gateway antivirus, content filtering, and WAN load balancing – into a single device, usually with a unified management console. • Next-generation firewall (NGFW): The distinguishing feature of NGFW is application data awareness. It can look at data not only at layers 3 and 4 of an Open Systems Interconnection (OSI) stack but also at layer 7 – the application level. Its capabilities extend beyond packet filtering and stateful inspection. There are multiple deployment options available for NGFWs, such as an appliance in the data center, as a software running in a VM in a cloud, or as a cloud service (FWaaS). Some capabilities of NGFW include [9]: a. Deep Packet Inspection (DPI) b. TLS decryption and inspection of packet payload c. Intrusion prevention system (IPS) feature • Web application and API protection (WAAP): This is a comprehensive security approach and an enhancement over WAF. WAF is an integral component for API security, BOT (abbreviation for Robot) defense, and DDOS protection. a. These can be offered as a product suite or as a cloud-based service [10][11]. b. Secure web gateway (SWGs): SWGs are appliances utilized for policy-based access to and control of cloud-based applications as well as governance of access to the open web for enterprise users in ubiquitous locations (e.g., headquarters, branch offices, home, remote locations). An SWG is fundamentally a web filter that protects outbound user traffic through HTTP or Hypertext Transfer Protocol Secure (HTTPS) inspection [12]. It also protects user endpoints from web-based threats that can occur when users click on links to malicious websites or to websites infected with malware. They centralize control, visibility, and reporting across many locations and types of users. They are not a replacement for WAFs, 10 NIST SP 800-215 November 2022 Network Security Automation Tools Guide to a Secure Enterprise Network Landscape which protect websites housed in enterprise data centers and large headquarter sites from inbound attacks.
|
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | Tell me the advantages and disadvantages of smartphones in our lives with one or two sentences for each one. Also give me three examples for healthcare services of smartphones and one example of being dangerous for our lives. | We are living in the era of gadgets and smartphones, and communication has never been so easy; with social media, we’re always connected to our friends and millions of other people, no matter where we are. All we need is a smartphone with an internet connection.
Mobile phones have become part of our daily lives, and beyond communication we have a vast variety of apps available that can make daily life a lot easier. Though the cost of app development is rising, the number of apps in app stores keeps increasing. Some of these apps have been optimized for mobile app stores so that we can find them more easily.
However, being in the business of making apps, we must ask what impact mobile phones have on our lives and on society. In this article, we’ll look into the positive and negative effects of using mobile phones on a daily basis.
Negative effects of mobile phones in our lives
1. Waste of time
As much as we love what today’s smartphones can do for us, this technology also has a downside. A recent study from the digital analytics firm Flurry shows that we spend, on average, almost 3-4 hours a day staring at our smart devices, totaling nearly one full day every week! One day, that’s right!
2. Addiction
Phone addiction has a name: nomophobia, the fear of being out of cell phone contact. So it is not just spending too much time on our devices that signals addiction; the fear of not having them on us does as well. As with any other form of addiction, studies show that people who are addicted to their phones often show signs of depression, anxiety, and other mental health problems.
3. Distraction
Another study, this time from Florida State University, says that smartphone notifications can impair our concentration: even though they are short in duration, they cause enough of a distraction to affect your ability to focus on a given task, decreasing your performance by prompting task-irrelevant thoughts and mind-wandering. This can be very dangerous in specific situations such as driving, where a simple notification can cause really serious accidents.
4. Affecting social skills
Besides the problems mentioned above, smartphone use also has a huge impact on people’s social lives. People are getting more disconnected from the real world and put their phones ahead of human interaction; it’s getting harder to see people talking to each other in public places because they’re always too busy with their mobile devices, checking notifications, sending messages or just sharing a new video. Our social skills seem to diminish steadily due to the overuse of smartphones, which is turning us into “smombies”.
“Smartphone zombies”, or “smombies”, regularly cross our paths; perhaps you’re not familiar with the term, but you most likely saw one today. They’re the people in public streets and places who walk slowly and erratically, with their eyes and fingers focused on their phone display. But it isn’t just road safety at stake here: think about how often they bump into things.
The technology that drives mobile devices has improved a lot since they appeared, and especially in the last ten years. Mobile gadgets have gotten smaller, more powerful, and very useful. They are everywhere and play increasingly greater roles in the lives of most everyone.
Positive effects of mobile phones in our life
1. Communication
Despite this darker side, mobile technology, in the form of phones, tablets, and notebooks, is making our lives better than ever before. It does this in many ways, not the least of which is making communication routine. We can be in touch with those we need to reach, whether for work or for personal reasons. Mobile technology has changed the way we do business for the better.
Never have we been able to share so much with friends and family as we can today, and that is in great part due to mobile technology. Without mobile devices and the technology behind them, participation in social networking would never have grown as much as it has. Sharing seemingly trivial information like where we are, what we are doing, and what that looks like significantly impacts our relationships with friends and loved ones.
Mobile technology has given a voice to those otherwise cut off from the world during cataclysmic events. That voice can reach out for help when local tragedy strikes, and for the first time, these people are not alone. They can share their plight using mobile communication through text, voice, and, most importantly, images, and bring about real change.
2. Daily utilities
Mobile phones have changed the way we live our lives. Now, not only can they help us stay connected with friends and family over social media or talk to someone on a video call without paying for data usage, but they also make everything from booking hotels and cabs to capturing memories easier than ever before thanks to their built-in cameras! We have more information in our hands than at any time in history. It has become second nature to quickly lookup helpful resources for whatever activity we need to do. Our gadgets can even anticipate what information we need and present it to us when it is most useful.
3. Healthcare services
While mobile phones have improved our daily lives on many levels, they have also profoundly raised the quality of life for many. Healthcare is an area that has embraced mobile technology, and while adoption is still in its infancy, the technology is already making profound improvements for many.
Healthcare providers get a quick medical opinion, through medical apps like this one, or they can review home medical tests from anywhere and make crucial changes to the patient’s care. Medical staff members can receive pacemaker tests remotely using a phone and change the programming of the device to address changes in the patient’s condition. Doctors can see intricate diagnostic images on phones and find conditions that need immediate treatment, all while the patient is comfortable at home.
Villagers in third-world countries who have no local healthcare can be diagnosed and have treatment prescribed by distant healthcare providers. Patients in areas experiencing significant problems with counterfeit medications can use a phone at the point of purchase to confirm if a medication is legitimate. This is saving lives and improving healthcare every day for those affected.
Children with ailments such as autism are using tablets to help them focus and communicate with those around them. Patients recovering from strokes and brain injuries are using tablets to great effect in their recoveries. Patients of all ages are using mobile devices to communicate with healthcare providers and loved ones as they never could before.
People born without hearing are having implants that can be programmed by wireless technology that allows them to hear their children speak for the very first time. Text messaging on phones has made a tremendous impact on communication for the deaf.
Diabetics can monitor their glucose level and have it wirelessly transferred to a small insulin pump that injects just the right amount to keep them where they need to be.
Blind individuals can use mobile phones to not only improve their lives but also help achieve an incredible level of independence. Not only do these phones speak to the blind so they know what is displayed on the screen, but they also have software that can safely guide them out in busy cities. Mobile technology can help the blind pick out clothes for the day that match. The technology on smartphones can scan the change received from purchase and tell them how much was given. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
Tell me the advantages and disadvantages of smartphones in our lives with one or two sentences for each one. Also give me three examples for healthcare services of smartphones and one example of being dangerous for our lives.
We are living in the era of gadgets and smartphones, and communication has never been so easy; with social media, we’re always connected to our friends and millions of other people, no matter where we are. All we need is a smartphone with an internet connection.
Mobile phones have become part of our daily lives, and beyond communication we have a vast variety of apps available that can make daily life a lot easier. Though the cost of app development is rising, the number of apps in app stores keeps increasing. Some of these apps have been optimized for mobile app stores so that we can find them more easily.
However, being in the business of making apps, we must ask what impact mobile phones have on our lives and on society. In this article, we’ll look into the positive and negative effects of using mobile phones on a daily basis.
Negative effects of mobile phones in our lives
1. Waste of time
As much as we love what today’s smartphones can do for us, this technology also has a downside. A recent study from the digital analytics firm Flurry shows that we spend, on average, almost 3-4 hours a day staring at our smart devices, totaling nearly one full day every week! One day, that’s right!
2. Addiction
Phone addiction has a name: nomophobia, the fear of being out of cell phone contact. So it is not just spending too much time on our devices that signals addiction; the fear of not having them on us does as well. As with any other form of addiction, studies show that people who are addicted to their phones often show signs of depression, anxiety, and other mental health problems.
3. Distraction
Another study, this time from Florida State University, says that smartphone notifications can impair our concentration: even though they are short in duration, they cause enough of a distraction to affect your ability to focus on a given task, decreasing your performance by prompting task-irrelevant thoughts and mind-wandering. This can be very dangerous in specific situations such as driving, where a simple notification can cause really serious accidents.
4. Affecting social skills
Besides the problems mentioned above, smartphone use also has a huge impact on people’s social lives. People are getting more disconnected from the real world and put their phones ahead of human interaction; it’s getting harder to see people talking to each other in public places because they’re always too busy with their mobile devices, checking notifications, sending messages or just sharing a new video. Our social skills seem to diminish steadily due to the overuse of smartphones, which is turning us into “smombies”.
“Smartphone zombies”, or “smombies”, regularly cross our paths; perhaps you’re not familiar with the term, but you most likely saw one today. They’re the people in public streets and places who walk slowly and erratically, with their eyes and fingers focused on their phone display. But it isn’t just road safety at stake here: think about how often they bump into things.
The technology that drives mobile devices has improved a lot since they appeared, and especially in the last ten years. Mobile gadgets have gotten smaller, more powerful, and very useful. They are everywhere and play increasingly greater roles in the lives of most everyone.
Positive effects of mobile phones in our life
1. Communication
Despite this darker side, mobile technology, in the form of phones, tablets, and notebooks, is making our lives better than ever before. It does this in many ways, not the least of which is making communication routine. We can be in touch with those we need to reach, whether for work or for personal reasons. Mobile technology has changed the way we do business for the better.
Never have we been able to share so much with friends and family as we can today, and that is in great part due to mobile technology. Without mobile devices and the technology behind them, participation in social networking would never have grown as much as it has. Sharing seemingly trivial information like where we are, what we are doing, and what that looks like significantly impacts our relationships with friends and loved ones.
Mobile technology has given a voice to those otherwise cut off from the world during cataclysmic events. That voice can reach out for help when local tragedy strikes, and for the first time, these people are not alone. They can share their plight using mobile communication through text, voice, and, most importantly, images, and bring about real change.
2. Daily utilities
Mobile phones have changed the way we live our lives. Now, not only can they help us stay connected with friends and family over social media or talk to someone on a video call without paying for data usage, but they also make everything from booking hotels and cabs to capturing memories easier than ever before thanks to their built-in cameras! We have more information in our hands than at any time in history. It has become second nature to quickly lookup helpful resources for whatever activity we need to do. Our gadgets can even anticipate what information we need and present it to us when it is most useful.
3. Healthcare services
While mobile phones have improved our daily lives on many levels, they have also profoundly raised the quality of life for many. Healthcare is an area that has embraced mobile technology, and while adoption is still in its infancy, the technology is already making profound improvements for many.
Healthcare providers get a quick medical opinion, through medical apps like this one, or they can review home medical tests from anywhere and make crucial changes to the patient’s care. Medical staff members can receive pacemaker tests remotely using a phone and change the programming of the device to address changes in the patient’s condition. Doctors can see intricate diagnostic images on phones and find conditions that need immediate treatment, all while the patient is comfortable at home.
Villagers in third-world countries who have no local healthcare can be diagnosed and have treatment prescribed by distant healthcare providers. Patients in areas experiencing significant problems with counterfeit medications can use a phone at the point of purchase to confirm if a medication is legitimate. This is saving lives and improving healthcare every day for those affected.
Children with ailments such as autism are using tablets to help them focus and communicate with those around them. Patients recovering from strokes and brain injuries are using tablets to great effect in their recoveries. Patients of all ages are using mobile devices to communicate with healthcare providers and loved ones as they never could before.
People born without hearing are having implants that can be programmed by wireless technology that allows them to hear their children speak for the very first time. Text messaging on phones has made a tremendous impact on communication for the deaf.
Diabetics can monitor their glucose level and have it wirelessly transferred to a small insulin pump that injects just the right amount to keep them where they need to be.
Blind individuals can use mobile phones to not only improve their lives but also help achieve an incredible level of independence. Not only do these phones speak to the blind so they know what is displayed on the screen, but they also have software that can safely guide them out in busy cities. Mobile technology can help the blind pick out clothes for the day that match. The technology on smartphones can scan the change received from purchase and tell them how much was given.
https://blog.mobiversal.com/the-impact-of-mobile-technology-in-our-daily-life.html |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | Why does having an autoimmune disease like lupus make it complicated for woman who have it to have healthy pregnancies and to have healthy babies. | Lupus tends to appear in women of childbearing age. It can affect pregnancy, however most women with lupus are able to have children. All pregnancies will need careful medical monitoring because of the risk of complications. It’s generally best to wait six months after a flare of symptoms and ideally have no active lupus symptoms prior to conception.
How lupus affects pregnancy
Lupus is a chronic condition that results from a malfunctioning immune system.
The immune system is designed to identify foreign bodies (such as bacteria and viruses) and attack them to keep us healthy. However, in the case of lupus, your immune system mistakenly attacks one or many different types of tissue in the body, such as the skin, joints, muscles, nerves, kidneys, heart or lungs. The result of this damage is ongoing inflammation and pain.
For these reasons, it’s important that you plan your pregnancy carefully.
The healthier you are before you get pregnant, the greater the chance that you will have a healthy pregnancy and a healthy baby. Aim to have your condition under control and be in the best possible health.
Talk with your doctor and specialist before you get pregnant. They may need to make important changes to your medication to ensure a safe pregnancy. Some medications are safe to take while you’re pregnant however others, like methotrexate, shouldn’t be taken. You may need to stop taking some medications months before trying to get pregnant as they can be harmful to your baby. Your doctors will help you plan this.
In some cases, there is a reduction in lupus symptoms during pregnancy. Your lupus is more likely to be stable throughout your pregnancy if your condition was stable before conceiving.
Complications of pregnancy
Most women with lupus are able to have a healthy baby; however, sometimes complications can occur. That’s why it’s so important you plan your pregnancy and work closely with your healthcare team to ensure you’re as healthy as possible before, during and after your pregnancy.
It’s also important that you know the possible problems that may occur so that you can be treated immediately. Many of these issues can be prevented or treated effectively if they’re dealt with early. Some of the problems that can occur during pregnancy for women with lupus include:
flares of your lupus symptoms may occur during pregnancy or immediately after you deliver, however this is less likely if your condition was stable before you became pregnant
high blood pressure (hypertension)
your baby may be born with low birth weight
pre-eclampsia – symptoms include high blood pressure and excessive amounts of protein lost through your urine
premature labour
increased risk of blood clots in the legs or lungs
increased risk of miscarriage
increased risk of emergency caesarean section
increased risk of excessive bleeding after delivery.
Medical care before and during pregnancy
It’s important that you have consistent and adequate medical care before and during your pregnancy. Discuss your plans to become pregnant with your doctor and specialist before you conceive. They can advise you of the best time to fall pregnant – it’s advisable to have had no lupus symptoms for at least six months prior to conception. They can also let you know about any particular risks you may face and whether your medication needs to be changed. Some medication taken for lupus can cross the placenta and pose a threat to your baby.
Once you have become pregnant, it's vital that you receive proper antenatal care to anticipate, prevent and solve any problems that may occur. You will need to contact your treating doctor in case your treatment needs to be changed or further tests are required.
It’s also important that you consult closely with both a rheumatologist and a specialist obstetrician throughout your pregnancy to lessen the risk of complications and monitor your baby's growth.
Lupus flares and normal pregnancy symptoms
Sometimes, it can be difficult to distinguish between a lupus flare and normal pregnancy symptoms. For this reason it’s important that you work closely with your healthcare team and obstetrician.
Some of the symptoms of pregnancy that may mimic those of lupus include:
fatigue
build-up of fluid in the joints
skin changes, such as rashes, flushes or darkening
hair loss following childbirth
shortness of breath
joint pain.
Lupus pregnancies and increased rate of premature birth and miscarriage
During pregnancy, the growing baby is nourished by the placenta. About one third of women with lupus have antibodies that may cause blood clots and interfere with the proper functioning of the placenta.
This is most likely to happen in the second trimester. The placenta isn’t able to supply the baby with sufficient nourishment and the baby’s growth is slowed. This may require early delivery via caesarean section. If the baby is born after 30 weeks’ gestation, or is at least 1.3 kg in weight, its chances of survival are good.
Your doctor can screen for antiphospholipid antibodies, and if they are found, may prescribe a blood thinner to help prevent blood clots. This can help prevent miscarriage in many women.
Pre-eclampsia is a condition that involves increased blood pressure, fluid retention and protein in the urine. It occurs in one in five women with lupus. If left untreated it can endanger the life of both the woman and her baby. Pre-eclampsia can be treated. However, depending on the severity, it may also require early delivery.
Neonatal lupus
Around one third of women with lupus have antibodies that may cause lupus-like symptoms in their baby once it‘s born. This is known as neonatal lupus. Symptoms may include skin rash, unusual blood count and, rarely, heartbeat irregularities. This is not SLE.
In babies who don’t experience heartbeat irregularities, all symptoms of neonatal lupus usually resolve by three to six months of age. Heartbeat irregularities can be successfully treated.
Lupus and pregnancy delay advice
Some women with lupus should delay pregnancy and discuss their plan with their treating doctor when they are planning to have a baby.
They include:
women whose lupus is active
women taking medication such as methotrexate, mycophenolate, or cyclophosphamide
women with kidney disease
women with previous thrombosis or miscarriage.
If you have any questions about your condition, medications and pregnancy, talk with your doctor.
Where to get help
Your GP (doctor)
Obstetrician
A specialist (often a rheumatologist, nephrologist, immunologist or dermatologist) | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
Why does having an autoimmune disease like lupus make it complicated for women who have it to have healthy pregnancies and to have healthy babies.
{passage 0}
==========
Lupus tends to appear in women of childbearing age. It can affect pregnancy; however, most women with lupus are able to have children. All pregnancies will need careful medical monitoring because of the risk of complications. It’s generally best to wait six months after a flare of symptoms and ideally have no active lupus symptoms prior to conception.
How lupus affects pregnancy
Lupus is a chronic condition that results from a malfunctioning immune system.
The immune system is designed to identify foreign bodies (such as bacteria and viruses) and attack them to keep us healthy. However, in the case of lupus, your immune system mistakenly attacks one or many different types of tissue in the body, such as the skin, joints, muscles, nerves, kidneys, heart or lungs. The result of this damage is ongoing inflammation and pain.
For these reasons, it’s important that you plan your pregnancy carefully.
The healthier you are before you get pregnant, the greater the chance that you will have a healthy pregnancy and a healthy baby. Aim to have your condition under control and be in the best possible health.
Talk with your doctor and specialist before you get pregnant. They may need to make important changes to your medication to ensure a safe pregnancy. Some medications are safe to take while you’re pregnant however others, like methotrexate, shouldn’t be taken. You may need to stop taking some medications months before trying to get pregnant as they can be harmful to your baby. Your doctors will help you plan this.
In some cases, there is a reduction in lupus symptoms during pregnancy. Your lupus is more likely to be stable throughout your pregnancy if your condition was stable before conceiving.
Complications of pregnancy
Most women with lupus are able to have a healthy baby; however, sometimes complications can occur. That’s why it’s so important you plan your pregnancy and work closely with your healthcare team to ensure you’re as healthy as possible before, during and after your pregnancy.
It’s also important that you know the possible problems that may occur so that you can be treated immediately. Many of these issues can be prevented or treated effectively if they’re dealt with early. Some of the problems that can occur during pregnancy for women with lupus include:
flares of your lupus symptoms may occur during pregnancy or immediately after you deliver, however this is less likely if your condition was stable before you became pregnant
high blood pressure (hypertension)
your baby may be born with low birth weight
pre-eclampsia – symptoms include high blood pressure and excessive amounts of protein lost through your urine
premature labour
increased risk of blood clots in the legs or lungs
increased risk of miscarriage
increased risk of emergency caesarean section
increased risk of excessive bleeding after delivery.
Medical care before and during pregnancy
It’s important that you have consistent and adequate medical care before and during your pregnancy. Discuss your plans to become pregnant with your doctor and specialist before you conceive. They can advise you of the best time to fall pregnant – it’s advisable to have had no lupus symptoms for at least six months prior to conception. They can also let you know about any particular risks you may face and whether your medication needs to be changed. Some medication taken for lupus can cross the placenta and pose a threat to your baby.
Once you have become pregnant, it's vital that you receive proper antenatal care to anticipate, prevent and solve any problems that may occur. You will need to contact your treating doctor in case your treatment needs to be changed or further tests are required.
It’s also important that you consult closely with both a rheumatologist and a specialist obstetrician throughout your pregnancy to lessen the risk of complications and monitor your baby's growth.
Lupus flares and normal pregnancy symptoms
Sometimes, it can be difficult to distinguish between a lupus flare and normal pregnancy symptoms. For this reason it’s important that you work closely with your healthcare team and obstetrician.
Some of the symptoms of pregnancy that may mimic those of lupus include:
fatigue
build-up of fluid in the joints
skin changes, such as rashes, flushes or darkening
hair loss following childbirth
shortness of breath
joint pain.
Lupus pregnancies and increased rate of premature birth and miscarriage
During pregnancy, the growing baby is nourished by the placenta. About one third of women with lupus have antibodies that may cause blood clots and interfere with the proper functioning of the placenta.
This is most likely to happen in the second trimester. The placenta isn’t able to supply the baby with sufficient nourishment and the baby’s growth is slowed. This may require early delivery via caesarean section. If the baby is born after 30 weeks’ gestation, or is at least 1.3 kg in weight, its chances of survival are good.
Your doctor can screen for antiphospholipid antibodies, and if they are found, may prescribe a blood thinner to help prevent blood clots. This can help prevent miscarriage in many women.
Pre-eclampsia is a condition that involves increased blood pressure, fluid retention and protein in the urine. It occurs in one in five women with lupus. If left untreated it can endanger the life of both the woman and her baby. Pre-eclampsia can be treated. However, depending on the severity, it may also require early delivery.
Neonatal lupus
Around one third of women with lupus have antibodies that may cause lupus-like symptoms in their baby once it‘s born. This is known as neonatal lupus. Symptoms may include skin rash, unusual blood count and, rarely, heartbeat irregularities. This is not SLE.
In babies who don’t experience heartbeat irregularities, all symptoms of neonatal lupus usually resolve by three to six months of age. Heartbeat irregularities can be successfully treated.
Lupus and pregnancy delay advice
Some women with lupus should delay pregnancy and discuss their plan with their treating doctor when they are planning to have a baby.
They include:
women whose lupus is active
women taking medication such as methotrexate, mycophenolate, or cyclophosphamide
women with kidney disease
women with previous thrombosis or miscarriage.
If you have any questions about your condition, medications and pregnancy, talk with your doctor.
Where to get help
Your GP (doctor)
Obstetrician
A specialist (often a rheumatologist, nephrologist, immunologist or dermatologist)
https://www.betterhealth.vic.gov.au/health/conditionsandtreatments/lupus-and-pregnancy |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | Compare the main types of financial arbitrage, and compare their advantages and disadvantages. Which type of arbitrage would be most suitable for a retail investor with a moderate amount of capital?
Merger arbitrage is an investing strategy that capitalizes on the difference in price between the target company’s stock price and the price offered by the acquirer in a merger or acquirement.
The differences between merger arbitrage and other types of arbitrage lie in the potential risks and rewards associated with the transaction. Merger arbitrage is less risky than other forms of arbitrage due to the long-term nature of the transaction and the ability to hedge some of the risks associated with the acquisition.
Merger arbitrage provides a high potential return with relatively low risk. It is also a relatively low-cost strategy and does not require the trader to take on a large amount of leverage.
Pros of merger arbitrage include the fact that investors capitalizes on the difference in price between the target company’s stock price and the price offered by the acquirer, as well as the potential for a high return on investment.
Cons of merger arbitrage include the fact that there is a great deal of uncertainty surrounding the transaction and the potential for the deal to fall through. This leads to a loss of capital for the investor.
An example of merger arbitrage is if a company announces a merger with another company, and the target company’s stock price jumps above the price offered by the acquirer. An investor could purchase stock in the target company and hold it until the acquisition was completed, thereby capitalizing on the price difference.
3. Convertible Arbitrage
Convertible arbitrage is an investment strategy where an investor will purchase a convertible bond and simultaneously sell short the stock into which the convertible are converted. Convertible arbitrage’s idea is that the investor profits from a discrepancy in the convertible arbitrage spread.
Convertible arbitrage’s biggest advantage is that it offers investors an opportunity for additional profits and helps reduce market risk by diversifying across different asset classes. Convertible arbitrage strategies have historically experienced lower volatility than traditional equity strategies.
The main disadvantage of convertible arbitrage is that it involves riskier activities than traditional arbitrage. It involves taking on the stock and the convertible bond risk. The liquidity risk of the underlying securities could be quite high.
4. Risk Arbitrage
Risk arbitrage is an investment strategy that seeks to take advantage of price discrepancies between related securities, often caused by corporate events such as mergers, restructurings, and takeover bids. Risk arbitrage involves buying the undervalued security and selling the overvalued security, with the expectation that the prices will converge as the corporate events unfold.
The main difference between risk arbitrage and other forms of arbitrage is that it involves taking a short-term risk, as there is a possibility that the arbitrageur will not be able to close out the positions prior to the prices converging. This could either result in a loss or a gain, depending on the direction and magnitude of the price movements.
The main advantage of risk arbitrage is the potential to earn high returns in a short period of time. Arbitrageurs are able to take advantage of price discrepancies that exist in the market, and if the prices converge as expected, large profits are realized.
The main disadvantage of risk arbitrage is that it involves taking a short-term risk. The arbitrageur could incur losses if the prices do not move in the expected direction or magnitude, In addition, risk arbitrage is time-sensitive, and the arbitrageur needs to be able to close out the positions prior to the prices converging in order to take advantage of the mispricing.
An example of risk arbitrage is the acquisition of a company by another company. If the market prices of the target company are lower than the offer price, the arbitrageur buy shares of the target company and short-sells shares of the acquiring company. If the market prices of the target company converge to the offer price, the arbitrageur closes out the positions and earns a profit.
5. Dividend Arbitrage
Dividend arbitrage is a form of arbitrage that involves taking advantage of the difference in share prices before and after the ex-dividend date. The dividend arbitrage strategy involves buying the stock before the ex-dividend date and then selling it on the same day at a higher price. This allows investors to capitalize on the difference in share prices without directly engaging in the stock market.
The difference between dividend arbitrage and other forms of arbitrage is that, in the case of dividend arbitrage, investors are taking advantage of the difference in share prices before and after the ex-dividend date. Other forms of arbitrage involve taking advantage of pricing discrepancies in different markets.
The main advantage of dividend arbitrage is that it allows investors to capitalize on the difference in share prices without directly engaging in the stock market. This benefits investors who need more time or resources to actively trade in the stock market.
The main disadvantage of dividend arbitrage is that it requires investors to buy the stock before the ex-dividend date. This means that there is a risk that the stock price could fall significantly before the ex-dividend date, resulting in a loss for the investor.
For example, if an investor buys a stock for Rs. 50 per share before the ex-dividend date and sells it for Rs. 55 per share on the same day, the investor will make a profit of Rs. 5 per share. This profit is made without having to actively engage in the stock market.
6. Futures Arbitrage
Futures Arbitrage is a strategy that involves taking advantage of discrepancies in pricing between two different markets for a fututes instrument. Futures arbitrage involves buying the futures in one market at a lower price and selling it in another at a higher price, thus making a profit.
The main difference between Futures Arbitrage and other arbitrage strategies is that Futures Arbitrage involves taking advantage of discrepancies in the prices of futures contracts. Other arbitrage strategies involve taking advantage of discrepancies between two or more different types of securities.
Pros of Futures Arbitrage include the potential for high returns in a relatively short period and the ability to capitalize on discrepancies in market prices without possessing the underlying instrument.
Cons of Futures Arbitrage include the high risk associated with this strategy and the fact that it requires a good understanding of the markets and the instruments being traded.
An example of Futures Arbitrage would be buying a gold futures contract in the US and selling the same contract in India at a higher price, thus making a profit.
7. Pure Arbitrage
Pure arbitrage is taking advantage of a price difference between two or more markets to make a risk-free profit. Pure arbitrage involves simultaneously buying and selling the same financial asset, commodity, or currency in different markets to take advantage of the price difference.
The main advantage of pure arbitrage is that it is a low-risk strategy. Since the investor is simultaneously buying and selling the same asset, at least one of their orders is guaranteed to be profitable.
The main disadvantage of pure arbitrage is that it is a complex and time-consuming process. It requires access to multiple markets and acting quickly to take advantage of the price discrepancies before they disappear.
For example, an investor notices that gold prices are higher in New York than in London. The investor buys gold in London and then simultaneously sells it in New York to take advantage of the price discrepancy and make a risk-free profit. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
Compare the main types of financial arbitrage, and compare their advantages and disadvantages. Which type of arbitrage would be most suitable for a retail investors with a moderate amount of capital?
<TEXT>
2. Merger Arbitrage
Merger arbitrage is an investing strategy that capitalizes on the difference in price between the target company’s stock price and the price offered by the acquirer in a merger or acquirement.
The differences between merger arbitrage and other types of arbitrage lie in the potential risks and rewards associated with the transaction. Merger arbitrage is less risky than other forms of arbitrage due to the long-term nature of the transaction and the ability to hedge some of the risks associated with the acquisition.
Merger arbitrage provides a high potential return with relatively low risk. It is also a relatively low-cost strategy and does not require the trader to take on a large amount of leverage.
Pros of merger arbitrage include the fact that investors capitalizes on the difference in price between the target company’s stock price and the price offered by the acquirer, as well as the potential for a high return on investment.
Cons of merger arbitrage include the fact that there is a great deal of uncertainty surrounding the transaction and the potential for the deal to fall through. This leads to a loss of capital for the investor.
An example of merger arbitrage is if a company announces a merger with another company, and the target company’s stock price jumps above the price offered by the acquirer. An investor could purchase stock in the target company and hold it until the acquisition was completed, thereby capitalizing on the price difference.
3. Convertible Arbitrage
Convertible arbitrage is an investment strategy where an investor will purchase a convertible bond and simultaneously sell short the stock into which the convertible are converted. Convertible arbitrage’s idea is that the investor profits from a discrepancy in the convertible arbitrage spread.
Convertible arbitrage’s biggest advantage is that it offers investors an opportunity for additional profits and helps reduce market risk by diversifying across different asset classes. Convertible arbitrage strategies have historically experienced lower volatility than traditional equity strategies.
The main disadvantage of convertible arbitrage is that it involves riskier activities than traditional arbitrage. It involves taking on the stock and the convertible bond risk. The liquidity risk of the underlying securities could be quite high.
4. Risk Arbitrage
Risk arbitrage is an investment strategy that seeks to take advantage of price discrepancies between related securities, often caused by corporate events such as mergers, restructurings, and takeover bids. Risk arbitrage involves buying the undervalued security and selling the overvalued security, with the expectation that the prices will converge as the corporate events unfold.
The main difference between risk arbitrage and other forms of arbitrage is that it involves taking a short-term risk, as there is a possibility that the arbitrageur will not be able to close out the positions prior to the prices converging. This could either result in a loss or a gain, depending on the direction and magnitude of the price movements.
The main advantage of risk arbitrage is the potential to earn high returns in a short period of time. Arbitrageurs are able to take advantage of price discrepancies that exist in the market, and if the prices converge as expected, large profits are realized.
The main disadvantage of risk arbitrage is that it involves taking a short-term risk. The arbitrageur could incur losses if the prices do not move in the expected direction or magnitude, In addition, risk arbitrage is time-sensitive, and the arbitrageur needs to be able to close out the positions prior to the prices converging in order to take advantage of the mispricing.
An example of risk arbitrage is the acquisition of a company by another company. If the market prices of the target company are lower than the offer price, the arbitrageur buy shares of the target company and short-sells shares of the acquiring company. If the market prices of the target company converge to the offer price, the arbitrageur closes out the positions and earns a profit.
5. Dividend Arbitrage
Dividend arbitrage is a form of arbitrage that involves taking advantage of the difference in share prices before and after the ex-dividend date. The dividend arbitrage strategy involves buying the stock before the ex-dividend date and then selling it on the same day at a higher price. This allows investors to capitalize on the difference in share prices without directly engaging in the stock market.
The difference between dividend arbitrage and other forms of arbitrage is that, in the case of dividend arbitrage, investors are taking advantage of the difference in share prices before and after the ex-dividend date. Other forms of arbitrage involve taking advantage of pricing discrepancies in different markets.
The main advantage of dividend arbitrage is that it allows investors to capitalize on the difference in share prices without directly engaging in the stock market. This benefits investors who need more time or resources to actively trade in the stock market.
The main disadvantage of dividend arbitrage is that it requires investors to buy the stock before the ex-dividend date. This means that there is a risk that the stock price could fall significantly before the ex-dividend date, resulting in a loss for the investor.
For example, if an investor buys a stock for Rs. 50 per share before the ex-dividend date and sells it for Rs. 55 per share on the same day, the investor will make a profit of Rs. 5 per share. This profit is made without having to actively engage in the stock market.
6. Futures Arbitrage
Futures Arbitrage is a strategy that involves taking advantage of discrepancies in pricing between two different markets for a fututes instrument. Futures arbitrage involves buying the futures in one market at a lower price and selling it in another at a higher price, thus making a profit.
The main difference between Futures Arbitrage and other arbitrage strategies is that Futures Arbitrage involves taking advantage of discrepancies in the prices of futures contracts. Other arbitrage strategies involve taking advantage of discrepancies between two or more different types of securities.
Pros of Futures Arbitrage include the potential for high returns in a relatively short period and the ability to capitalize on discrepancies in market prices without possessing the underlying instrument.
Cons of Futures Arbitrage include the high risk associated with this strategy and the fact that it requires a good understanding of the markets and the instruments being traded.
An example of Futures Arbitrage would be buying a gold futures contract in the US and selling the same contract in India at a higher price, thus making a profit.
7. Pure Arbitrage
Pure arbitrage is taking advantage of a price difference between two or more markets to make a risk-free profit. Pure arbitrage involves simultaneously buying and selling the same financial asset, commodity, or currency in different markets to take advantage of the price difference.
The main advantage of pure arbitrage is that it is a low-risk strategy. Since the investor is simultaneously buying and selling the same asset, at least one of their orders is guaranteed to be profitable.
The main disadvantage of pure arbitrage is that it is a complex and time-consuming process. It requires access to multiple markets and acting quickly to take advantage of the price discrepancies before they disappear.
For example, an investor notices that gold prices are higher in New York than in London. The investor buys gold in London and then simultaneously sells it in New York to take advantage of the price discrepancy and make a risk-free profit.
https://www.strike.money/stock-market/arbitrage |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | discuss wether the fine tuning of LLMs on medical datasets has consistently improved risk prediction performance for Alzheimer’s disease using ĖHRs. Discuss the specific methods proposed for handling some of the different setbacks is prediction accuracy. | 1Introduction
Alzheimer’s disease (AD) and Alzheimer’s disease related dementias (ADRD) are neurodegenerative disorders primarily affecting memory and cognitive functions. They gradually erode overall function abilities, eventually leading to death [39]. The development of AD/ADRD treatment has been slow due to the complex disease pathology and clinical manifestations. The decline of memory and cognitive functions is associated with pathological progression and structural changes of the brain [28], which can be identified from neuroimage or biomarkers from cerebro-spinal fluid. However, those procedures are expensive and invasive, which are unlikely to be ordered for asymptomatic patients. For real world patients, typically only the electronic health records (EHRs) collected from their routined care are available[6, 18]. These data include information like demographics, lab tests, diagnoses, medications, and procedures, and they provide a potential opportunity for risk prediction of AD/ADRD [34].
Risk prediction from EHRs is commonly formulated as a supervised learning problem [56] and one can model with existing supervised learning (SLs) tools, such as logistic regression (LR) [68], XGBoost (XGB) [44], and multi-layer perceptron (MLP) [54]. However, SL approaches face significant challenges in predicting risk from EHRs, due to the complexity of medical problems and the noisy nature of the data [75]. Moreover, EHRs do not contain all critical information that is needed for risk prediction for particular conditions. For example, diagnosis of MCI requires a comprehensive evaluation of cognitive functions, such as memory, executive function, and language. In early stages, when symptoms are subtle and not extensively documented in the EHRs, risk
| Type | Vital sign | Lab. Test | ICD | RxNorm | CPT |
|------|------------|-----------|-----|--------|-----|
| Domain | ℝ | ℝ | [0,1] (Positive, Negative) | [0,1] (Positive, Negative) | [0,1] (Positive, Negative) |
| Example | Blood Pressure, Age | Hemoglobin Level | J18.9 for Pneumonia | 4099 for Estrogen | A4206 for DME and supplies |
| Short Explanation | Physiological measurement to assess a patient's status | Analyzing biochemical markers using blood and urine | Alphanumeric system classifying diseases | Standardized nomenclature system for clinical drugs | Medical procedure identification for billing |
Table 1: Brief explanation of the five categories; Vital sign, Laboratory test, ICD code, RxNorm code, and CPT code, in the EHR dataset, describing each patient.
prediction using traditional machine-learning approaches can be difficult. Though some information in EHRs may be weakly related to the risk, SL models may or may not be able to pick them up.
Recent advancements in pre-trained large language models (LLMs) [61, 62, 1, 8, 58] have demonstrated their capability to provide robust reasoning power, particularly with rich contextual information and domain knowledge. Intuitively, LLM can leverage its reasoning capability and flexible in-context learning (ICL) strategies to better derive valuable insights from EHRs. However, there are still several technical challenges to achieve this goal. The first one is how to perform effective reasoning with an EHR database. While fine-tuning external knowledge into the LLMs has been a major approach in many domains, it is not trivial to fine-tune knowledge from EHR to LLMs. EHR includes clinical information for individual patients and evolves over time, whereas LLMs are typically learned and tuned using static information. The second challenge is the representation of medical records for reasoning. LLMs are probability models trained to understand and reason with natural language, and it is not clear how structured EHRs, such as vital, diagnosis codes, and prescriptions, are best represented in LLMs for effective reasoning. The third challenge is rooted in the inherent data quality issues in EHR data, which could be noisy as they were originally designed for billing purposes. The presence of such events is likely to compromise and greatly mislead the reasoning of LLMs.
Contributions. Here, we summarize the contributions as follows:
- We identified the strengths and weaknesses of SLs and LLMs in risk predictions from EHR. From the SLs’ perspective, they provide accurate predictions for confident samples, which are typically aligned well with training data distribution. However, when the samples are not common or the features are sparse, SLs are usually not confident about the predictions and generate poorer predictions than LLMs, showing the value of reasoning from LLMs in EHR analysis.
• Based on our findings, we propose a collaborative approach that combines SLs and LLMs through a confidence- driven selection process for enhanced ADRD risk prediction. This method dynamically selects between SL and LLM predictions based on confidence levels, effectively leveraging the strengths of SLs for high-confidence cases and LLMs for low-confidence instances. Furthermore, we incorporate a meticulously designed ICL demonstration denoising strategy to save the ICL performance of LLMs, which in turn boosts the overall efficiency of the pipeline.
• We validate our approach using a real-world dataset from the OHSU health system, highlighting the effectiveness of our method and its superiority over traditional SLs and LLMs in predicting ADRD. Additionally, we conduct experiments with different sizes of LLMs and models fine-tuned on various medical datasets. Our findings suggest that neither a larger model size nor fine-tuning on medical data consistently improves risk prediction performance. Further investigation is required to check these dynamics in practice.
LLMs for Clinical Domain
LLMs possess strong capability in performing various tasks, including those in the medical field [23]. In particular, many studies have attempted to develop new LLMs specifically for medical tasks. For example, Med-PaLM [55] represents a medical domain-specific variant of the PaLM model. Similarly, based on Alpaca [57], MedAlpaca [21] was proposed, and fine-tuend LLaMA [61, 62] for medical domain, PMC-LLaMA [67] was suggested. Chat-bot oriented model [70] and Huatuo-GPT [71] were trained using the dataset obtained from the real-world doctors and ChatGPT [1]. Yang et al. [69] trained and release the GatorTron model. Different from proposing a new medical-specific models, several works have aimed to directly use the pre-trained LLMs in a zero-shot manner. For example in [42, 38] used GPT models for the medical field. Nori et al. [43] proposed a way of leveraging pre-trained LLMs for the medical field by leveraging some techniques including in-context learning, and chain-of-thought. | [question]
discuss whether the fine-tuning of LLMs on medical datasets has consistently improved risk prediction performance for Alzheimer’s disease using EHRs. Discuss the specific methods proposed for handling some of the different setbacks in prediction accuracy.
=====================
[text]
1 Introduction
Alzheimer’s disease (AD) and Alzheimer’s disease related dementias (ADRD) are neurodegenerative disorders primarily affecting memory and cognitive functions. They gradually erode overall function abilities, eventually leading to death [39]. The development of AD/ADRD treatment has been slow due to the complex disease pathology and clinical manifestations. The decline of memory and cognitive functions is associated with pathological progression and structural changes of the brain [28], which can be identified from neuroimage or biomarkers from cerebro-spinal fluid. However, those procedures are expensive and invasive, which are unlikely to be ordered for asymptomatic patients. For real world patients, typically only the electronic health records (EHRs) collected from their routined care are available[6, 18]. These data include information like demographics, lab tests, diagnoses, medications, and procedures, and they provide a potential opportunity for risk prediction of AD/ADRD [34].
Risk prediction from EHRs is commonly formulated as a supervised learning problem [56] and one can model with existing supervised learning (SLs) tools, such as logistic regression (LR) [68], XGBoost (XGB) [44], and multi-layer perceptron (MLP) [54]. However, SL approaches face significant challenges in predicting risk from EHRs, due to the complexity of medical problems and the noisy nature of the data [75]. Moreover, EHRs do not contain all critical information that is needed for risk prediction for particular conditions. For example, diagnosis of MCI requires a comprehensive evaluation of cognitive functions, such as memory, executive function, and language. In early stages, when symptoms are subtle and not extensively documented in the EHRs, risk
| Type | Vital sign | Lab. Test | ICD | RxNorm | CPT |
|------|------------|-----------|-----|--------|-----|
| Domain | ℝ | ℝ | [0,1] (Positive, Negative) | [0,1] (Positive, Negative) | [0,1] (Positive, Negative) |
| Example | Blood Pressure, Age | Hemoglobin Level | J18.9 for Pneumonia | 4099 for Estrogen | A4206 for DME and supplies |
| Short Explanation | Physiological measurement to assess a patient's status | Analyzing biochemical markers using blood and urine | Alphanumeric system classifying diseases | Standardized nomenclature system for clinical drugs | Medical procedure identification for billing |
Table 1: Brief explanation of the five categories; Vital sign, Laboratory test, ICD code, RxNorm code, and CPT code, in the EHR dataset, describing each patient.
prediction using traditional machine-learning approaches can be difficult. Though some information in EHRs may be weakly related to the risk, SL models may or may not be able to pick them up.
Recent advancements in pre-trained large language models (LLMs) [61, 62, 1, 8, 58] have demonstrated their capability to provide robust reasoning power, particularly with rich contextual information and domain knowledge. Intuitively, LLM can leverage its reasoning capability and flexible in-context learning (ICL) strategies to better derive valuable insights from EHRs. However, there are still several technical challenges to achieve this goal. The first one is how to perform effective reasoning with an EHR database. While fine-tuning external knowledge into the LLMs has been a major approach in many domains, it is not trivial to fine-tune knowledge from EHR to LLMs. EHR includes clinical information for individual patients and evolves over time, whereas LLMs are typically learned and tuned using static information. The second challenge is the representation of medical records for reasoning. LLMs are probability models trained to understand and reason with natural language, and it is not clear how structured EHRs, such as vital, diagnosis codes, and prescriptions, are best represented in LLMs for effective reasoning. The third challenge is rooted in the inherent data quality issues in EHR data, which could be noisy as they were originally designed for billing purposes. The presence of such events is likely to compromise and greatly mislead the reasoning of LLMs.
Contributions. Here, we summarize the contributions as follows:
- We identified the strengths and weaknesses of SLs and LLMs in risk predictions from EHR. From the SLs’ perspective, they provide accurate predictions for confident samples, which are typically aligned well with training data distribution. However, when the samples are not common or the features are sparse, SLs are usually not confident about the predictions and generate poorer predictions than LLMs, showing the value of reasoning from LLMs in EHR analysis.
• Based on our findings, we propose a collaborative approach that combines SLs and LLMs through a confidence- driven selection process for enhanced ADRD risk prediction. This method dynamically selects between SL and LLM predictions based on confidence levels, effectively leveraging the strengths of SLs for high-confidence cases and LLMs for low-confidence instances. Furthermore, we incorporate a meticulously designed ICL demonstration denoising strategy to save the ICL performance of LLMs, which in turn boosts the overall efficiency of the pipeline.
• We validate our approach using a real-world dataset from the OHSU health system, highlighting the effectiveness of our method and its superiority over traditional SLs and LLMs in predicting ADRD. Additionally, we conduct experiments with different sizes of LLMs and models fine-tuned on various medical datasets. Our findings suggest that neither a larger model size nor fine-tuning on medical data consistently improves risk prediction performance. Further investigation is required to check these dynamics in practice.
LLMs for Clinical Domain
LLMs possess strong capability in performing various tasks, including those in the medical field [23]. In particular, many studies have attempted to develop new LLMs specifically for medical tasks. For example, Med-PaLM [55] represents a medical domain-specific variant of the PaLM model. Similarly, MedAlpaca [21] was proposed based on Alpaca [57], and PMC-LLaMA [67], a LLaMA [61, 62] model fine-tuned for the medical domain, was suggested. A chat-bot-oriented model [70] and HuatuoGPT [71] were trained using data obtained from real-world doctors and ChatGPT [1]. Yang et al. [69] trained and released the GatorTron model. Rather than proposing new medical-specific models, several works have aimed to use pre-trained LLMs directly in a zero-shot manner; for example, [42, 38] used GPT models for the medical field, and Nori et al. [43] proposed a way of leveraging pre-trained LLMs for medicine using techniques including in-context learning and chain-of-thought prompting.
https://arxiv.org/pdf/2405.16413
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | I'm so confused by this text. How many mice were used in this study? What controls were used to limit bias? Who was the main author and what are their qualifications? Can you give me a list of all authors associated with the University of Copenhagen? list them in an alphabetical, bulleted format. | New research describes for the first time how a spreading wave of disruption and the flow of fluid in the brain triggers headaches, detailing the connection between the neurological symptoms associated with aura and the migraine that follows. The study also identifies new proteins that could be responsible for headaches and may serve as foundation for new migraine drugs.
“In this study, we describe the interaction between the central and peripheral nervous system brought about by increased concentrations of proteins released in the brain during an episode of spreading depolarization, a phenomenon responsible for the aura associated with migraines,” said Maiken Nedergaard, MD, DMSc, co-director of the University of Rochester Center for Translational Neuromedicine and lead author of the new study, which appears in the journal Science. “These findings provide us with a host of new targets to suppress sensory nerve activation to prevent and treat migraines and strengthen existing therapies.”
"These findings provide us with a host of new targets to suppress sensory nerve activation to prevent and treat migraines and strengthen existing therapies.”
Maiken Nedergaard, MD, DMSc
It is estimated that one out of 10 people experience migraines and in about a quarter of these cases the headache is preceded by an aura, a sensory disturbance that can includes light flashes, blind spots, double vision, and tingling sensations or limb numbness. These symptoms typically appear five to 60 minutes prior to the headache.
The cause of the aura is a phenomenon called cortical spreading depression, a temporary depolarization of neurons and other cells caused by diffusion of glutamate and potassium that radiates like a wave across the brain, reducing oxygen levels and impairing blood flow. Most frequently, the depolarization event is located in the visual processing center of the brain cortex, hence the visual symptoms that first herald a coming headache.
While migraines auras arise in the brain, the organ itself cannot sense pain. These signals must instead be transmitted from the central nervous system—the brain and spinal cord—to the peripheral nervous system, the communication network that transmits information between brain with the rest of the body and includes sensory nerves responsible for sending information such as touch and pain. The process of communication between the brain and peripheral sensory nerves in migraines has largely remained a mystery.
Fluid Dynamics Models Shed Light on Migraine Pain Origins
Nedergaard and her colleagues at the University of Rochester and the University of Copenhagen are pioneers in understanding the flow of fluids in the brain. In 2012, her lab was the first to describe the glymphatic system, which uses cerebrospinal fluid (CSF) to wash away toxic proteins in the brain. In partnership with experts in fluid dynamics, the team has built detailed models of how the CSF moves in the brain and its role in transporting proteins, neurotransmitters, and other chemicals.
The most widely accepted theory is that nerve endings resting on the outer surface of the membranes that enclose the brain are responsible for the headaches that follow an aura. The new study, which was conducted in mice, describes a different route and identifies proteins, many of which are potential new drug targets, that may be responsible for activating the nerves and causing pain.
As the depolarization wave spreads, neurons release a host of inflammatory and other proteins into CSF. In a series of experiments in mice, the researchers showed how CSF transports these proteins to the trigeminal ganglion, a large bundle of nerves that rests at the base of the skull and supplies sensory information to the head and face.
It was assumed that the trigeminal ganglion, like the rest of the peripheral nervous system, rested outside the blood-brain-barrier, which tightly controls what molecules enter and leave the brain. However, the researchers identified a previously unknown gap in the barrier that allowed CSF to flow directly into the trigeminal ganglion, exposing sensory nerves to the cocktail of proteins released by the brain.
Migraine-Associated Proteins Double During Brain Wave Activity
model_image
Analyzing the molecules, the researchers identified twelve proteins called ligands that bind with receptors on sensory nerves found in the trigeminal ganglion, potentially causing these cells to activate. The concentrations of several of these proteins found in CSF more than doubled following a cortical spreading depression. One of the proteins, calcitonin gene-related peptide (CGRP), is already the target of a new class of drugs to treat and prevent migraines called CGRP inhibitors. Other identified proteins are known to play a role in other pain conditions, such as neuropathic pain, and are likely important in migraine headaches as well.
“We have identified a new signaling pathway and several molecules that activate sensory nerves in the peripheral nervous system. Among the identified molecules are those already associated with migraines, but we didn't know exactly how and where the migraine inducing action occurred,” said Martin Kaag Rasmussen, PhD, a postdoctoral fellow at the University of Copenhagen and first author of the study. “Defining the role of these newly identified ligand-receptor pairs may enable the discovery of new pharmacological targets, which could benefit the large portion of patients not responding to available therapies.”
The researchers also observed that the transport of proteins released in one side of the brain reaches mostly the nerves in the trigeminal ganglion on the same side, potentially explaining why pain occurs on one side of the head during most migraines.
Additional co-authors Kjeld Mollgard, Peter Bork, Pia Weikop, Tina Esmail, Lylia Drici, Nicolai Albrechtsen, Matthias Mann, Yuki Mori, and Jonathan Carlsen with the University of Copenhagen, Nguyen Huynh and Steve Goldman with URMC, and Nima Ghitani and Alexander Chesler with the National Institute of Neurological Disorders and Stroke (NINDS). The research was supported with funding from the Novo Nordisk Foundation, NINDS, the US Army Research Office, the Lundbeck Foundation, and the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
I'm so confused by this text. How many mice were used in this study? What controls were used to limit bias? Who was the main author and what are their qualifications? Can you give me a list of all authors associated with the University of Copenhagen? list them in an alphabetical, bulleted format.
New research describes for the first time how a spreading wave of disruption and the flow of fluid in the brain triggers headaches, detailing the connection between the neurological symptoms associated with aura and the migraine that follows. The study also identifies new proteins that could be responsible for headaches and may serve as foundation for new migraine drugs.
“In this study, we describe the interaction between the central and peripheral nervous system brought about by increased concentrations of proteins released in the brain during an episode of spreading depolarization, a phenomenon responsible for the aura associated with migraines,” said Maiken Nedergaard, MD, DMSc, co-director of the University of Rochester Center for Translational Neuromedicine and lead author of the new study, which appears in the journal Science. “These findings provide us with a host of new targets to suppress sensory nerve activation to prevent and treat migraines and strengthen existing therapies.”
"These findings provide us with a host of new targets to suppress sensory nerve activation to prevent and treat migraines and strengthen existing therapies.”
Maiken Nedergaard, MD, DMSc
It is estimated that one out of 10 people experience migraines and in about a quarter of these cases the headache is preceded by an aura, a sensory disturbance that can includes light flashes, blind spots, double vision, and tingling sensations or limb numbness. These symptoms typically appear five to 60 minutes prior to the headache.
The cause of the aura is a phenomenon called cortical spreading depression, a temporary depolarization of neurons and other cells caused by diffusion of glutamate and potassium that radiates like a wave across the brain, reducing oxygen levels and impairing blood flow. Most frequently, the depolarization event is located in the visual processing center of the brain cortex, hence the visual symptoms that first herald a coming headache.
While migraines auras arise in the brain, the organ itself cannot sense pain. These signals must instead be transmitted from the central nervous system—the brain and spinal cord—to the peripheral nervous system, the communication network that transmits information between brain with the rest of the body and includes sensory nerves responsible for sending information such as touch and pain. The process of communication between the brain and peripheral sensory nerves in migraines has largely remained a mystery.
Fluid Dynamics Models Shed Light on Migraine Pain Origins
Nedergaard and her colleagues at the University of Rochester and the University of Copenhagen are pioneers in understanding the flow of fluids in the brain. In 2012, her lab was the first to describe the glymphatic system, which uses cerebrospinal fluid (CSF) to wash away toxic proteins in the brain. In partnership with experts in fluid dynamics, the team has built detailed models of how the CSF moves in the brain and its role in transporting proteins, neurotransmitters, and other chemicals.
The most widely accepted theory is that nerve endings resting on the outer surface of the membranes that enclose the brain are responsible for the headaches that follow an aura. The new study, which was conducted in mice, describes a different route and identifies proteins, many of which are potential new drug targets, that may be responsible for activating the nerves and causing pain.
As the depolarization wave spreads, neurons release a host of inflammatory and other proteins into CSF. In a series of experiments in mice, the researchers showed how CSF transports these proteins to the trigeminal ganglion, a large bundle of nerves that rests at the base of the skull and supplies sensory information to the head and face.
It was assumed that the trigeminal ganglion, like the rest of the peripheral nervous system, rested outside the blood-brain-barrier, which tightly controls what molecules enter and leave the brain. However, the researchers identified a previously unknown gap in the barrier that allowed CSF to flow directly into the trigeminal ganglion, exposing sensory nerves to the cocktail of proteins released by the brain.
Migraine-Associated Proteins Double During Brain Wave Activity
model_image
Analyzing the molecules, the researchers identified twelve proteins called ligands that bind with receptors on sensory nerves found in the trigeminal ganglion, potentially causing these cells to activate. The concentrations of several of these proteins found in CSF more than doubled following a cortical spreading depression. One of the proteins, calcitonin gene-related peptide (CGRP), is already the target of a new class of drugs to treat and prevent migraines called CGRP inhibitors. Other identified proteins are known to play a role in other pain conditions, such as neuropathic pain, and are likely important in migraine headaches as well.
“We have identified a new signaling pathway and several molecules that activate sensory nerves in the peripheral nervous system. Among the identified molecules are those already associated with migraines, but we didn't know exactly how and where the migraine inducing action occurred,” said Martin Kaag Rasmussen, PhD, a postdoctoral fellow at the University of Copenhagen and first author of the study. “Defining the role of these newly identified ligand-receptor pairs may enable the discovery of new pharmacological targets, which could benefit the large portion of patients not responding to available therapies.”
The researchers also observed that the transport of proteins released in one side of the brain reaches mostly the nerves in the trigeminal ganglion on the same side, potentially explaining why pain occurs on one side of the head during most migraines.
Additional co-authors Kjeld Mollgard, Peter Bork, Pia Weikop, Tina Esmail, Lylia Drici, Nicolai Albrechtsen, Matthias Mann, Yuki Mori, and Jonathan Carlsen with the University of Copenhagen, Nguyen Huynh and Steve Goldman with URMC, and Nima Ghitani and Alexander Chesler with the National Institute of Neurological Disorders and Stroke (NINDS). The research was supported with funding from the Novo Nordisk Foundation, NINDS, the US Army Research Office, the Lundbeck Foundation, and the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation.
https://www.urmc.rochester.edu/news/story/study-reveals-brain-fluid-dynamics-as-key-to-migraine-mysteries-new-therapies |
You may only respond to the prompt using information provided in the context block. Provide your answer using a bulleted list. | What was decided in Robinson v. California? | The Eighth Amendment provides, “Excessive bail shall not be required, nor excessive fines imposed, nor
cruel and unusual punishments inflicted.” The Eighth Amendment’s prohibitions apply to laws enacted by
the federal government, and by state governments and their local subdivisions by operation of the
Fourteenth Amendment.
The Supreme Court has interpreted the Amendment’s prohibition on cruel and unusual punishments to,
among other things, impose some “substantive limits on what the government may criminalize.” Before
Grants Pass, the Supreme Court had issued two primary cases elaborating on the Eighth Amendment’s
substantive limits on what a government may criminalize: Robinson v. California and Powell v. Texas.
In Robinson v. California, a 1962 case, the Court heard an Eighth Amendment challenge to a California
law that made it a misdemeanor offense for an individual to “be addicted to the use of narcotics.” The
defendant was convicted under the law; however, at the time of his arrest, he “was neither under the
influence of narcotics nor suffering withdrawal symptoms.” The Supreme Court reversed the conviction
and expressed concern that the defendant was convicted on the basis of his “status,” specifically that he
suffered from the “chronic condition . . . of [being] addicted to the use of narcotics.” Put differently, the
Court was troubled that the defendant was not convicted “upon proof of the actual use of narcotics.” The
majority thus ruled that, under the Eighth Amendment, an individual may not be punished for a status or
in the absence of some conduct (or “actus reus”).
Six years after Robinson, the Court issued its opinion in Powell v. Texas, a case involving an Eighth
Amendment challenge to a law that proscribed public intoxication. The defendant argued that because he
was a chronic alcoholic, being intoxicated in public was “not of his own volition.” While the case
produced multiple opinions, the plurality determined that “Texas has sought to punish not for a status, as
California did in Robinson,” but rather “for public behavior which may create substantial health and
safety hazards, both for [the defendant] and for members of the general public.” That is, the plurality
indicated that the law at issue criminalized conduct, not status, which it viewed as permissible.
In his concurring opinion in Powell, Justice Black stated that Robinson established a status-conduct
distinction, forbidding punishment when the individual has not committed a “wrongful act.” Justice White
also concurred in the result. Citing Robinson, Justice White opined that “[i]f it cannot be a crime to have
an irresistible compulsion to use narcotics, I do not see how it can constitutionally be a crime to yield to
such a compulsion.” He thus suggested that an individual may not be punished for conduct symptomatic
of or compelled by an addiction. Justice White, however, concluded that the record did not support a
finding that the defendant could not avoid being in public while intoxicated. Accordingly, Justice White
was not prepared to “say that the chronic alcoholic who proves his disease and a compulsion to drink is
shielded from conviction [for] the [additional] act of going to or remaining in a public place.”
Four Justices dissented. They contended that the defendant was “powerless” to drink, had an
“uncontrollable compulsion to drink to the point of intoxication,” and that once in this state “he could not
prevent himself from appearing in public places.” In other words, they suggested that, here, drinking and
appearing in public were both involuntary acts making criminal punishment inappropriate. | You may only respond to the prompt using information provided in the context block. Provide your answer using a bulleted list.
What was decided in Robinson v. California?
The Eighth Amendment provides, “Excessive bail shall not be required, nor excessive fines imposed, nor
cruel and unusual punishments inflicted.” The Eighth Amendment’s prohibitions apply to laws enacted by
the federal government, and by state governments and their local subdivisions by operation of the
Fourteenth Amendment.
The Supreme Court has interpreted the Amendment’s prohibition on cruel and unusual punishments to,
among other things, impose some “substantive limits on what the government may criminalize.” Before
Grants Pass, the Supreme Court had issued two primary cases elaborating on the Eighth Amendment’s
substantive limits on what a government may criminalize: Robinson v. California and Powell v. Texas.
In Robinson v. California, a 1962 case, the Court heard an Eighth Amendment challenge to a California
law that made it a misdemeanor offense for an individual to “be addicted to the use of narcotics.” The
defendant was convicted under the law; however, at the time of his arrest, he “was neither under the
influence of narcotics nor suffering withdrawal symptoms.” The Supreme Court reversed the conviction
and expressed concern that the defendant was convicted on the basis of his “status,” specifically that he
suffered from the “chronic condition . . . of [being] addicted to the use of narcotics.” Put differently, the
Court was troubled that the defendant was not convicted “upon proof of the actual use of narcotics.” The
majority thus ruled that, under the Eighth Amendment, an individual may not be punished for a status or
in the absence of some conduct (or “actus reus”).
Six years after Robinson, the Court issued its opinion in Powell v. Texas, a case involving an Eighth
Amendment challenge to a law that proscribed public intoxication. The defendant argued that because he
was a chronic alcoholic, being intoxicated in public was “not of his own volition.” While the case
produced multiple opinions, the plurality determined that “Texas has sought to punish not for a status, as
California did in Robinson,” but rather “for public behavior which may create substantial health and
safety hazards, both for [the defendant] and for members of the general public.” That is, the plurality
indicated that the law at issue criminalized conduct, not status, which it viewed as permissible.
In his concurring opinion in Powell, Justice Black stated that Robinson established a status-conduct
distinction, forbidding punishment when the individual has not committed a “wrongful act.” Justice White
also concurred in the result. Citing Robinson, Justice White opined that “[i]f it cannot be a crime to have
an irresistible compulsion to use narcotics, I do not see how it can constitutionally be a crime to yield to
such a compulsion.” He thus suggested that an individual may not be punished for conduct symptomatic
of or compelled by an addiction. Justice White, however, concluded that the record did not support a
finding that the defendant could not avoid being in public while intoxicated. Accordingly, Justice White
was not prepared to “say that the chronic alcoholic who proves his disease and a compulsion to drink is
shielded from conviction [for] the [additional] act of going to or remaining in a public place.”
Four Justices dissented. They contended that the defendant was “powerless” to drink, had an
“uncontrollable compulsion to drink to the point of intoxication,” and that once in this state “he could not
prevent himself from appearing in public places.” In other words, they suggested that, here, drinking and
appearing in public were both involuntary acts making criminal punishment inappropriate. |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | Consider the increasing use of edge computing across a range of sectors, including network optimization, agriculture, and manufacturing. In particular how, how can edge computing handle constraints on bandwidth, latency, and data sovereignty that come with typical centralized data centers? Talk about how edge computing is especially well-suited to real-time, data-intensive applications like worker safety is hazardous or remote environments and autonomous cars. What are the main elements that, in these situations, make computing essential? | Data is the lifeblood of modern business, providing valuable business insight and supporting real-time control over critical business processes and operations. Today's businesses are awash in an ocean of data, and huge amounts of data can be routinely collected from sensors and IoT devices operating in real time from remote locations and inhospitable operating environments almost anywhere in the world.
But this virtual flood of data is also changing the way businesses handle computing. The traditional computing paradigm built on a centralized data center and everyday internet isn't well suited to moving endlessly growing rivers of real-world data. Bandwidth limitations, latency issues and unpredictable network disruptions can all conspire to impair such efforts. Businesses are responding to these data challenges through the use of edge computing architecture.
In simplest terms, edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself. Rather than transmitting raw data to a central data center for processing and analysis, that work is instead performed where the data is actually generated -
Thus, edge computing is reshaping IT and business computing. Take a comprehensive look at what edge computing is, how it works, the influence of the cloud, edge use cases, tradeoffs and implementation considerations.
Edge computing uses
Edge computing brings data processing closer to the data source.
How does edge computing work?
Edge computing is all a matter of location. In traditional enterprise computing, data is produced at a client endpoint, such as a user's computer.
But the number of devices connected to the internet, and the volume of data being produced by those devices and used by businesses, is growing far too quickly for traditional data center infrastructures to accommodate.
So IT architects have shifted focus from the central data center to the logical edge of the infrastructure -- taking storage and computing resources from the data center and moving those resources to the point where the data is generated.
Edge computing adoption
Although only 27% of respondents have already implemented edge computing technologies, 54% find the idea interesting.
Edge computing puts storage and servers where the data is, often requiring little more than a partial rack of gear to operate on the remote LAN to collect and process the data locally.
The idea of business intelligence can vary dramatically. Some examples include retail environments where video surveillance of the showroom floor might be combined with actual sales data to determine the most desirable product configuration or consumer demand.
Edge vs. cloud vs. fog computing
Edge computing is closely associated with the concepts of cloud computing and fog computing. Although there is some overlap between these concepts, they aren't the same thing, and generally shouldn't be used interchangeably. It's helpful to compare the concepts and understand their differences.
One of the easiest ways to understand the differences between edge, cloud and fog computing is to highlight their common theme: All three concepts relate to distributed computing and focus on the physical deployment of compute and storage resources in relation to the data that is being produced. The difference is a matter of where those resources are located.
Edge computing vs. cloud
Compare edge cloud, cloud computing and edge computing to determine which model is best for you.
Edge. Edge computing is the deployment of computing and storage resources at the location where data is produced. This ideally puts compute and storage at the same point as the data source at the network edge.
Cloud. Cloud computing is a huge, highly scalable deployment of compute and storage resources at one of several distributed global locations (regions). Cloud providers also incorporate an assortment of pre-packaged services for IoT operations, making the cloud a preferred centralized platform for IoT deployments.
Edge computing architecture
Unlike cloud computing, edge computing allows data to exist closer to the data sources through a network of edge devices.
Fog. But the choice of compute and storage deployment isn't limited to the cloud or the edge.
Fog computing environments can produce bewildering amounts of sensor or IoT data generated across expansive physical areas that are just too large to define an edge. Examples include smart buildings, smart cities or even smart utility grids
Note: It's important to repeat that fog computing and edge computing share an almost identical definition and architecture, and the terms are sometimes used interchangeably even among technology experts.
Why is edge computing important?
Computing tasks demand suitable architectures, and the architecture that suits one type of computing task doesn't necessarily fit all types of computing tasks. Edge computing has emerged as a viable and important architecture that supports distributed computing to deploy compute and storage resources closer to -- ideally in the same physical location a
But decentralization can be challenging, demanding high levels of monitoring and control that are easily overlooked when moving away from a traditional centralized computing model. Edge computing has become relevant because it offers an effective solution to emerging network problems associated with moving enormous volumes of data that today's organizations produce and consume. It's not just a problem of amount. It's also a matter of time; applications depend on processing and responses that are increasingly time-sensitive.
Consider the rise of self-driving cars. They will depend on intelligent traffic control signals. Cars and traffic controls will need to produce, analyze and exchange data in real time. Multiply this requirement by huge numbers of autonomous vehicles, and the scope of the potential problems becomes clearer.
Bandwidth. Bandwidth is the amount of data which a network can carry over time, usually expressed in bits per second. All networks have a limited bandwidth, and the limits are more severe for wireless communication.
Latency. Latency is the time needed to send data between two points on a networkIn other cases, network outages can exacerbate congestion and even sever communication to some internet users entirely - making the internet of things useless during outages.
By deploying servers and storage where the data is generated, edge computing can operate many devices over a much smaller and more efficient LAN where ample bandwidth is used exclusively by local data-generating devices, making latency and congestion virtually nonexistent.
Edge computing use cases and examples
In principal, edge computing techniques are used to collect, filter, process and analyze data "in-place" at or near the network edge. It's a powerful means of using data that can't be first moved to a centralized location -- usually because the sheer volume of data makes such moves cost-prohibitive, technologically impractical or might otherwise violate compliance obligations, such as data sovereignty. This definition has spawned myriad real-world examples and use cases:
Manufacturing. An industrial manufacturer deployed edge computing to monitor manufacturing, enabling real-time analytics and machine learning at the edge to find production errors and improve product manufacturing quality.
Farming. Consider a business that grows crops indoors without sunlight, soil or pesticides. The process reduces grow times by more than 60%. Using sensors enables the business to track water use, nutrient density and determine optimal harvest. Data is collected and analyzed to find the effects of environmental factors and continually improve the crop growing algorithms and ensure that crops are harvested in peak condition.
Network optimization. Edge computing can help optimize network performance by measuring performance for users across the internet and then employing analytics to determine the most reliable, low-latency network path for each user's traffic. In effect, edge computing is used to "steer" traffic across the network for optimal time-sensitive traffic performance.
Workplace safety. Edge computing can combine and analyze data from on-site cameras, employee safety devices and various other sensors to help businesses oversee workplace conditions or ensure that employees follow established safety protocols -- especially when the workplace is remote or unusually dangerous, such as construction sites or oil rigs. | [question]
Consider the increasing use of edge computing across a range of sectors, including network optimization, agriculture, and manufacturing. In particular how, how can edge computing handle constraints on bandwidth, latency, and data sovereignty that come with typical centralized data centers? Talk about how edge computing is especially well-suited to real-time, data-intensive applications like worker safety is hazardous or remote environments and autonomous cars. What are the main elements that, in these situations, make computing essential?
=====================
[text]
Data is the lifeblood of modern business, providing valuable business insight and supporting real-time control over critical business processes and operations. Today's businesses are awash in an ocean of data, and huge amounts of data can be routinely collected from sensors and IoT devices operating in real time from remote locations and inhospitable operating environments almost anywhere in the world.
But this virtual flood of data is also changing the way businesses handle computing. The traditional computing paradigm built on a centralized data center and everyday internet isn't well suited to moving endlessly growing rivers of real-world data. Bandwidth limitations, latency issues and unpredictable network disruptions can all conspire to impair such efforts. Businesses are responding to these data challenges through the use of edge computing architecture.
In simplest terms, edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself. Rather than transmitting raw data to a central data center for processing and analysis, that work is instead performed where the data is actually generated -
Thus, edge computing is reshaping IT and business computing. Take a comprehensive look at what edge computing is, how it works, the influence of the cloud, edge use cases, tradeoffs and implementation considerations.
Edge computing uses
Edge computing brings data processing closer to the data source.
How does edge computing work?
Edge computing is all a matter of location. In traditional enterprise computing, data is produced at a client endpoint, such as a user's computer.
But the number of devices connected to the internet, and the volume of data being produced by those devices and used by businesses, is growing far too quickly for traditional data center infrastructures to accommodate.
So IT architects have shifted focus from the central data center to the logical edge of the infrastructure -- taking storage and computing resources from the data center and moving those resources to the point where the data is generated.
Edge computing adoption
Although only 27% of respondents have already implemented edge computing technologies, 54% find the idea interesting.
Edge computing puts storage and servers where the data is, often requiring little more than a partial rack of gear to operate on the remote LAN to collect and process the data locally.
The idea of business intelligence can vary dramatically. Some examples include retail environments where video surveillance of the showroom floor might be combined with actual sales data to determine the most desirable product configuration or consumer demand.
Edge vs. cloud vs. fog computing
Edge computing is closely associated with the concepts of cloud computing and fog computing. Although there is some overlap between these concepts, they aren't the same thing, and generally shouldn't be used interchangeably. It's helpful to compare the concepts and understand their differences.
One of the easiest ways to understand the differences between edge, cloud and fog computing is to highlight their common theme: All three concepts relate to distributed computing and focus on the physical deployment of compute and storage resources in relation to the data that is being produced. The difference is a matter of where those resources are located.
Edge computing vs. cloud
Compare edge cloud, cloud computing and edge computing to determine which model is best for you.
Edge. Edge computing is the deployment of computing and storage resources at the location where data is produced. This ideally puts compute and storage at the same point as the data source at the network edge.
Cloud. Cloud computing is a huge, highly scalable deployment of compute and storage resources at one of several distributed global locations (regions). Cloud providers also incorporate an assortment of pre-packaged services for IoT operations, making the cloud a preferred centralized platform for IoT deployments.
Edge computing architecture
Unlike cloud computing, edge computing allows data to exist closer to the data sources through a network of edge devices.
Fog. But the choice of compute and storage deployment isn't limited to the cloud or the edge.
Fog computing environments can produce bewildering amounts of sensor or IoT data generated across expansive physical areas that are just too large to define an edge. Examples include smart buildings, smart cities or even smart utility grids
Note: It's important to repeat that fog computing and edge computing share an almost identical definition and architecture, and the terms are sometimes used interchangeably even among technology experts.
Why is edge computing important?
Computing tasks demand suitable architectures, and the architecture that suits one type of computing task doesn't necessarily fit all types of computing tasks. Edge computing has emerged as a viable and important architecture that supports distributed computing to deploy compute and storage resources closer to -- ideally in the same physical location a
But decentralization can be challenging, demanding high levels of monitoring and control that are easily overlooked when moving away from a traditional centralized computing model. Edge computing has become relevant because it offers an effective solution to emerging network problems associated with moving enormous volumes of data that today's organizations produce and consume. It's not just a problem of amount. It's also a matter of time; applications depend on processing and responses that are increasingly time-sensitive.
Consider the rise of self-driving cars. They will depend on intelligent traffic control signals. Cars and traffic controls will need to produce, analyze and exchange data in real time. Multiply this requirement by huge numbers of autonomous vehicles, and the scope of the potential problems becomes clearer.
Bandwidth. Bandwidth is the amount of data which a network can carry over time, usually expressed in bits per second. All networks have a limited bandwidth, and the limits are more severe for wireless communication.
Latency. Latency is the time needed to send data between two points on a network. In other cases, network outages can exacerbate congestion and even sever communication to some internet users entirely - making the internet of things useless during outages.
By deploying servers and storage where the data is generated, edge computing can operate many devices over a much smaller and more efficient LAN where ample bandwidth is used exclusively by local data-generating devices, making latency and congestion virtually nonexistent.
Edge computing use cases and examples
In principle, edge computing techniques are used to collect, filter, process and analyze data "in-place" at or near the network edge. It's a powerful means of using data that can't be first moved to a centralized location -- usually because the sheer volume of data makes such moves cost-prohibitive, technologically impractical or might otherwise violate compliance obligations, such as data sovereignty. This definition has spawned myriad real-world examples and use cases:
Manufacturing. An industrial manufacturer deployed edge computing to monitor manufacturing, enabling real-time analytics and machine learning at the edge to find production errors and improve product manufacturing quality.
Farming. Consider a business that grows crops indoors without sunlight, soil or pesticides. The process reduces grow times by more than 60%. Using sensors enables the business to track water use, nutrient density and determine optimal harvest. Data is collected and analyzed to find the effects of environmental factors and continually improve the crop growing algorithms and ensure that crops are harvested in peak condition.
Network optimization. Edge computing can help optimize network performance by measuring performance for users across the internet and then employing analytics to determine the most reliable, low-latency network path for each user's traffic. In effect, edge computing is used to "steer" traffic across the network for optimal time-sensitive traffic performance.
Workplace safety. Edge computing can combine and analyze data from on-site cameras, employee safety devices and various other sensors to help businesses oversee workplace conditions or ensure that employees follow established safety protocols -- especially when the workplace is remote or unusually dangerous, such as construction sites or oil rigs.
https://www.techtarget.com/searchdatacenter/definition/edge-computing
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
only use information from the provided context. answer in bullet points. keep it short, two sentences for each bullet point. | summarize the complaints related to the five technologies that apple allegedly suppressed. | The DOJ’s Complaint
The DOJ’s complaint alleges that Apple has monopolized markets for “performance smartphones” and
smartphones generally by impeding the development of technologies that threaten to undermine the
iPhone platform. Some of the DOJ’s allegations involve Apple’s control of iPhone app distribution.
Specifically, the complaint asserts that Apple wields its power over app approval to block or marginalize
technologies that may reduce consumers’ dependence on the iPhone. The DOJ also contends that Apple
maintains its monopoly by denying application programming interfaces (APIs) and other access points to
third-party services that would reduce the costs of switching from an iPhone to another smartphone.
The complaint highlights five technologies that Apple has allegedly suppressed using the tactics described
above. These allegations are summarized below.
• Super Apps. The DOJ argues that Apple has thwarted the development of “super apps”—
programs that can serve as platforms for other apps. Super apps are popular in Asian
markets, offering users a suite of services like payments, messaging, and e-commerce
within a single app. The DOJ claims that, by offering a range of services that can be
accessed on different types of devices, super apps threaten to disintermediate the iPhone
and commoditize device hardware. The DOJ alleges that Apple strategically changed its
App Store Guidelines to suppress this threat, effectively preventing apps from hosting the
types of “mini programs” offered by super apps. In particular, the complaint asserts that
Apple imposes restrictions that make it difficult for users to find mini programs. The DOJ
also contends that Apple prevents mini programs from accessing APIs needed to
implement Apple’s in-app payment system, functionally precluding developers from
monetizing such programs.
• Cloud Streaming Apps. The complaint claims that Apple has also suppressed the
development of cloud streaming apps, which allow users to run computationally intensive
programs without storing the programs on their smartphones. By leveraging the
computing power of remote servers, cloud streaming apps facilitate complex programs
like gaming and artificial intelligence services, even if consumers purchase smartphones
with less sophisticated hardware than an iPhone. The DOJ says that Apple is highly
attuned to this threat, quoting an executive’s concern that consumers might buy an
Android device “for 25 bux at a garage sale and . . . have a solid cloud computing device”
that “works fine.”
Apple has allegedly taken several steps to avert this outcome. The complaint contends
that Apple requires developers to submit any cloud streaming game or update as a
standalone app for approval by Apple. Because advanced games often require daily or
hourly updates, the DOJ claims that this requirement presents developers with the
untenable choice of delaying software updates for non-iOS versions of their games or
making the iOS versions incompatible with non-iOS versions. The lawsuit also alleges
that Apple undermines cloud gaming apps in other ways—for example, by requiring
“game overhauls and payment redesigns” that effectively force developers to create
iOS-specific versions of their games instead of a single cross-platform version. As a
result of Apple’s conduct, the DOJ says, no developer has designed a cloud streaming app
for the iPhone.
• Messaging. The complaint alleges that Apple degrades cross-platform messaging in
several ways, which discourages the use of other smartphones. For example, the DOJ
claims that Apple prevents third-party messaging apps from accessing APIs that allow for
the combination of “text to anyone” functionality and the advanced features of “over the
top” (OTT) messaging protocols (e.g., encryption, typing indicators, read receipts, and
the ability to share rich media). As a result, use of a third-party messaging app requires
both the sender and recipient of a message to download the same third-party app. Apple
Messages, by contrast, incorporates “text to anyone” functionality and advanced OTT
features, allowing users to send messages with such features by typing a phone number in
the messaging app’s “To:” field.
The DOJ also alleges that Apple undermines the messaging quality of rival smartphones:
if an iPhone user messages a non-iPhone user via Apple Messages, the text appears in a
green bubble and offers limited functionality. Specifically, these conversations are not
encrypted, videos are lower quality, and users cannot edit messages or see typing
indicators. The complaint claims that Apple takes steps to preserve these disadvantages
for competing smartphones—for example, by refusing to make Apple Messages available
to other smartphones and blocking developers from providing end-to-end encryption for
texts from Apple Messages to Android users.
• Smartwatches. The DOJ also alleges that Apple has suppressed the development of
cross-platform smartwatches, steering consumers to Apple’s smartwatch and thereby
locking them into the iPhone ecosystem. The complaint contends that Apple degrades the
functionality of third-party smartwatches by preventing them from responding to iPhone
notifications, inhibiting them from maintaining reliable connections with iPhones, and
undermining the performance of third-party smartwatches that connect directly with a
cellular network. In doing so, the DOJ says, Apple bolsters its own smartwatch—Apple
Watch—which does not face these disadvantages. Because Apple Watch is not
compatible with other smartphones, purchases of Apple Watch raise the costs of
switching from an iPhone to another smartphone. Thus, by favoring Apple Watch and
degrading rival smartwatches, the DOJ claims, Apple helps solidify its smartphone
monopoly.
• Digital Wallets. The DOJ argues that Apple has implemented a similar strategy vis-à-vis
digital wallets, which allow for the storage and use of passes and credentials such as
credit cards, movie tickets, and car keys. The complaint alleges that Apple’s digital
wallet—Apple Wallet—is the only iPhone app that is allowed to access the technology
needed for tap-to-pay functionality, which the DOJ characterizes as the “most important
function for attracting users to a digital wallet.” The DOJ also claims that Apple prevents
rival digital wallets from authenticating digital payment options on online checkout pages
and from serving as alternatives to Apple’s in-app payment tool, further reducing the
attractiveness of rival wallets to consumers. By stifling the emergence of cross-platform
wallets, the lawsuit contends, Apple has suppressed the development of technology that
could reduce the costs of switching from an iPhone to another smartphone.
The complaint asserts that Apple’s conduct amounts to monopolization or, in the alternative, attempted
monopolization of two markets: the U.S. market for “performance smartphones” and a broader U.S.
market for all smartphones. The DOJ argues that “performance smartphones” represent a distinct market
because “entry-level smartphones” made with lower-quality materials and performance components are
not reasonable substitutes for “higher-end” smartphones like the iPhone.
In support of its allegations of monopoly power, the DOJ contends that Apple occupies more than 70% of
the market for “performance smartphones” and more than 65% of the market for smartphones generally,
benefits from substantial barriers to entry and expansion, foregoes innovation without fear of losing
customers, and achieves profit margins that significantly exceed those of rivals.
In alleging anticompetitive effects, the DOJ claims that the conduct described above results in less choice
for smartphone users, harms the quality of the iPhone and rival smartphones, and allows Apple to extract
higher profits from iPhone users and app developers. The complaint rejects the notion that these harms
can be justified on the basis of privacy, security, or other procompetitive benefits. Here, the DOJ argues
that many of the technologies Apple suppresses—for example, apps that would allow Apple Messages to
send encrypted texts to Android devices—would themselves enhance privacy and security. The lawsuit
contends that Apple’s selective invocation of privacy and security underscores the pretextual nature of
those defenses. | only use information from the provided context. answer in bullet points. keep it short, two sentences for each bullet point.
summarize the complaints related to the five technologies that apple allegedly suppressed.
The DOJ’s Complaint
The DOJ’s complaint alleges that Apple has monopolized markets for “performance smartphones” and
smartphones generally by impeding the development of technologies that threaten to undermine the
iPhone platform. Some of the DOJ’s allegations involve Apple’s control of iPhone app distribution.
Specifically, the complaint asserts that Apple wields its power over app approval to block or marginalize
technologies that may reduce consumers’ dependence on the iPhone. The DOJ also contends that Apple
maintains its monopoly by denying application programming interfaces (APIs) and other access points to
third-party services that would reduce the costs of switching from an iPhone to another smartphone.
The complaint highlights five technologies that Apple has allegedly suppressed using the tactics described
above. These allegations are summarized below.
• Super Apps. The DOJ argues that Apple has thwarted the development of “super apps”—
programs that can serve as platforms for other apps. Super apps are popular in Asian
markets, offering users a suite of services like payments, messaging, and e-commerce
within a single app. The DOJ claims that, by offering a range of services that can be
accessed on different types of devices, super apps threaten to disintermediate the iPhone
and commoditize device hardware. The DOJ alleges that Apple strategically changed its
App Store Guidelines to suppress this threat, effectively preventing apps from hosting the
types of “mini programs” offered by super apps. In particular, the complaint asserts that
Apple imposes restrictions that make it difficult for users to find mini programs. The DOJ
also contends that Apple prevents mini programs from accessing APIs needed to
implement Apple’s in-app payment system, functionally precluding developers from
monetizing such programs.
• Cloud Streaming Apps. The complaint claims that Apple has also suppressed the
development of cloud streaming apps, which allow users to run computationally intensive
programs without storing the programs on their smartphones. By leveraging the
computing power of remote servers, cloud streaming apps facilitate complex programs
like gaming and artificial intelligence services, even if consumers purchase smartphones
with less sophisticated hardware than an iPhone. The DOJ says that Apple is highly
attuned to this threat, quoting an executive’s concern that consumers might buy an
Android device “for 25 bux at a garage sale and . . . have a solid cloud computing device”
that “works fine.”
Apple has allegedly taken several steps to avert this outcome. The complaint contends
that Apple requires developers to submit any cloud streaming game or update as a
standalone app for approval by Apple. Because advanced games often require daily or
hourly updates, the DOJ claims that this requirement presents developers with the
untenable choice of delaying software updates for non-iOS versions of their games or
making the iOS versions incompatible with non-iOS versions. The lawsuit also alleges
that Apple undermines cloud gaming apps in other ways—for example, by requiring
“game overhauls and payment redesigns” that effectively force developers to create
iOS-specific versions of their games instead of a single cross-platform version. As a
result of Apple’s conduct, the DOJ says, no developer has designed a cloud streaming app
for the iPhone.
• Messaging. The complaint alleges that Apple degrades cross-platform messaging in
several ways, which discourages the use of other smartphones. For example, the DOJ
claims that Apple prevents third-party messaging apps from accessing APIs that allow for
the combination of “text to anyone” functionality and the advanced features of “over the
top” (OTT) messaging protocols (e.g., encryption, typing indicators, read receipts, and
the ability to share rich media). As a result, use of a third-party messaging app requires
both the sender and recipient of a message to download the same third-party app. Apple
Messages, by contrast, incorporates “text to anyone” functionality and advanced OTT
features, allowing users to send messages with such features by typing a phone number in
the messaging app’s “To:” field.
The DOJ also alleges that Apple undermines the messaging quality of rival smartphones:
if an iPhone user messages a non-iPhone user via Apple Messages, the text appears in a
green bubble and offers limited functionality. Specifically, these conversations are not
encrypted, videos are lower quality, and users cannot edit messages or see typing
indicators. The complaint claims that Apple takes steps to preserve these disadvantages
for competing smartphones—for example, by refusing to make Apple Messages available
to other smartphones and blocking developers from providing end-to-end encryption for
texts from Apple Messages to Android users.
• Smartwatches. The DOJ also alleges that Apple has suppressed the development of
cross-platform smartwatches, steering consumers to Apple’s smartwatch and thereby
locking them into the iPhone ecosystem. The complaint contends that Apple degrades the
functionality of third-party smartwatches by preventing them from responding to iPhone
notifications, inhibiting them from maintaining reliable connections with iPhones, and
undermining the performance of third-party smartwatches that connect directly with a
cellular network. In doing so, the DOJ says, Apple bolsters its own smartwatch—Apple
Watch—which does not face these disadvantages. Because Apple Watch is not
compatible with other smartphones, purchases of Apple Watch raise the costs of
switching from an iPhone to another smartphone. Thus, by favoring Apple Watch and
degrading rival smartwatches, the DOJ claims, Apple helps solidify its smartphone
monopoly.
• Digital Wallets. The DOJ argues that Apple has implemented a similar strategy vis-à-vis
digital wallets, which allow for the storage and use of passes and credentials such as
credit cards, movie tickets, and car keys. The complaint alleges that Apple’s digital
wallet—Apple Wallet—is the only iPhone app that is allowed to access the technology
needed for tap-to-pay functionality, which the DOJ characterizes as the “most important
function for attracting users to a digital wallet.” The DOJ also claims that Apple prevents
rival digital wallets from authenticating digital payment options on online checkout pages
and from serving as alternatives to Apple’s in-app payment tool, further reducing the
attractiveness of rival wallets to consumers. By stifling the emergence of cross-platform
wallets, the lawsuit contends, Apple has suppressed the development of technology that
could reduce the costs of switching from an iPhone to another smartphone.
The complaint asserts that Apple’s conduct amounts to monopolization or, in the alternative, attempted
monopolization of two markets: the U.S. market for “performance smartphones” and a broader U.S.
market for all smartphones. The DOJ argues that “performance smartphones” represent a distinct market
because “entry-level smartphones” made with lower-quality materials and performance components are
not reasonable substitutes for “higher-end” smartphones like the iPhone.
In support of its allegations of monopoly power, the DOJ contends that Apple occupies more than 70% of
the market for “performance smartphones” and more than 65% of the market for smartphones generally,
benefits from substantial barriers to entry and expansion, foregoes innovation without fear of losing
customers, and achieves profit margins that significantly exceed those of rivals.
In alleging anticompetitive effects, the DOJ claims that the conduct described above results in less choice
for smartphone users, harms the quality of the iPhone and rival smartphones, and allows Apple to extract
higher profits from iPhone users and app developers. The complaint rejects the notion that these harms
can be justified on the basis of privacy, security, or other procompetitive benefits. Here, the DOJ argues
that many of the technologies Apple suppresses—for example, apps that would allow Apple Messages to
send encrypted texts to Android devices—would themselves enhance privacy and security. The lawsuit
contends that Apple’s selective invocation of privacy and security underscores the pretextual nature of
those defenses. |
Use only the information provided in this prompt and context for your answer. Do not use any outside information, and if you cannot answer from the provided context, please state, "I cannot provide an answer due to lack of context." Also, please break down your answer into bullet points with an explanation of each point. | According to the following text, what is the significance of genetics when it comes to Granulomatosis with polyangiitis (GPA)? | Granulomatosis with polyangiitis
Description
Granulomatosis with polyangiitis (GPA) is a condition that causes inflammation that
primarily affects the respiratory tract (including the lungs and airways) and the kidneys.
This disorder was formerly known as Wegener granulomatosis. A characteristic feature of
GPA is inflammation of blood vessels (vasculitis), particularly the small- and medium-sized blood vessels in the lungs, nose, sinuses, windpipe, and kidneys, although
vessels in any organ can be involved. Polyangiitis refers to the inflammation of multiple
types of vessels, such as small arteries and veins. Vasculitis causes scarring and tissue
death in the vessels and impedes blood flow to tissues and organs.
Another characteristic feature of GPA is the formation of granulomas, which are small
areas of inflammation composed of immune cells that aid in the inflammatory reaction.
The granulomas usually occur in the lungs or airways of people with this condition,
although they can occur in the eyes or other organs. As granulomas grow, they can
invade surrounding areas, causing tissue damage.
The signs and symptoms of GPA vary based on the tissues and organs affected by
vasculitis. Many people with this condition experience a vague feeling of discomfort (
malaise), fever, weight loss, or other general symptoms of the body's immune reaction.
In most people with GPA, inflammation begins in the vessels of the respiratory tract,
leading to nasal congestion, frequent nosebleeds, shortness of breath, or coughing.
Severe inflammation in the nose can lead to a hole in the tissue that separates the two
nostrils (nasal septum perforation) or a collapse of the septum, causing a sunken bridge
of the nose (saddle nose).
The kidneys are commonly affected in people with GPA. Tissue damage caused by
vasculitis in the kidneys can lead to decreased kidney function, which may cause
increased blood pressure or blood in the urine, and life-threatening kidney failure.
Inflammation can also occur in other regions of the body, including the eyes, middle and
inner ear structures, skin, joints, nerves, heart, and brain. Depending on which systems
are involved, additional symptoms can include skin rashes, inner ear pain, swollen and
painful joints, and numbness or tingling in the limbs.
GPA is most common in middle-aged adults, although it can occur at any age. If
untreated, the condition is usually fatal within 2 years of diagnosis. Even after treatment,
vasculitis can return.
Frequency
GPA is a rare disorder that affects an estimated 3 in 100,000 people in the United
States.
Causes
The genetic basis of GPA is not well understood. Having a particular version of the HLA-DPB1 gene is the strongest genetic risk factor for developing this condition, although
several other genes, some of which have not been identified, may be involved. It is
likely that a combination of genetic and environmental factors lead to GPA.
GPA is an autoimmune disorder. Such disorders occur when the immune system
malfunctions and attacks the body's own tissues and organs. Approximately 90 percent
of people with GPA have an abnormal immune protein called an anti-neutrophil
cytoplasmic antibody (ANCA) in their blood. Antibodies normally bind to specific foreign
particles and germs, marking them for destruction, but ANCAs attack normal human
proteins. Most people with GPA have an ANCA that attacks the human protein
proteinase 3 (PR3). A few affected individuals have an ANCA that attacks a protein
called myeloperoxidase (MPO). When these antibodies attach to the protein they
recognize, they trigger inflammation, which contributes to the signs and symptoms of
GPA.
The HLA-DPB1 gene belongs to a family of genes called the human leukocyte antigen (
HLA) complex. The HLA complex helps the immune system distinguish the body's own
proteins from proteins made by foreign invaders (such as viruses and bacteria). Each
HLA gene has many different normal variations, allowing each person's immune system
to react to a wide range of foreign proteins. A particular variant of the HLA-DPB1 gene
called HLA-DPB1*0401 has been found more frequently in people with GPA, especially
those with ANCAs, than in people without the condition.
Because the HLA-DPB1 gene is involved in the immune system, changes in it might be
related to the autoimmune response and inflammation in the respiratory tract and
kidneys characteristic of GPA. However, it is unclear what specific role the HLA-DPB1*
0401 gene variant plays in development of this condition.
Learn more about the gene associated with Granulomatosis with polyangiitis
• HLA-DPB1
Inheritance
The inheritance pattern of GPA is unknown. Most instances are sporadic and occur in
individuals with no history of the disorder in their family. Only rarely is more than one
member of the same family affected by the disorder. | Use only the information provided in this prompt and context for your answer. Do not use any outside information, and if you cannot answer from the provided context, please state, "I cannot provide an answer due to lack of context." Also, please break down your answer into bullet points with an explanation of each point.
According to the following text, what is the significance of genetics when it comes to Granulomatosis with polyangiitis (GPA)?
Description
Granulomatosis with polyangiitis (GPA) is a condition that causes inflammation that
primarily affects the respiratory tract (including the lungs and airways) and the kidneys.
This disorder was formerly known as Wegener granulomatosis. A characteristic feature of
GPA is inflammation of blood vessels (vasculitis), particularly the small- and medium-sized blood vessels in the lungs, nose, sinuses, windpipe, and kidneys, although
vessels in any organ can be involved. Polyangiitis refers to the inflammation of multiple
types of vessels, such as small arteries and veins. Vasculitis causes scarring and tissue
death in the vessels and impedes blood flow to tissues and organs.
Another characteristic feature of GPA is the formation of granulomas, which are small
areas of inflammation composed of immune cells that aid in the inflammatory reaction.
The granulomas usually occur in the lungs or airways of people with this condition,
although they can occur in the eyes or other organs. As granulomas grow, they can
invade surrounding areas, causing tissue damage.
The signs and symptoms of GPA vary based on the tissues and organs affected by
vasculitis. Many people with this condition experience a vague feeling of discomfort (
malaise), fever, weight loss, or other general symptoms of the body's immune reaction.
In most people with GPA, inflammation begins in the vessels of the respiratory tract,
leading to nasal congestion, frequent nosebleeds, shortness of breath, or coughing.
Severe inflammation in the nose can lead to a hole in the tissue that separates the two
nostrils (nasal septum perforation) or a collapse of the septum, causing a sunken bridge
of the nose (saddle nose).
The kidneys are commonly affected in people with GPA. Tissue damage caused by
vasculitis in the kidneys can lead to decreased kidney function, which may cause
increased blood pressure or blood in the urine, and life-threatening kidney failure.
Inflammation can also occur in other regions of the body, including the eyes, middle and
inner ear structures, skin, joints, nerves, heart, and brain. Depending on which systems
are involved, additional symptoms can include skin rashes, inner ear pain, swollen and
painful joints, and numbness or tingling in the limbs.
GPA is most common in middle-aged adults, although it can occur at any age. If
untreated, the condition is usually fatal within 2 years of diagnosis. Even after treatment,
vasculitis can return.
Frequency
GPA is a rare disorder that affects an estimated 3 in 100,000 people in the United
States.
Causes
The genetic basis of GPA is not well understood. Having a particular version of the HLA-DPB1 gene is the strongest genetic risk factor for developing this condition, although
several other genes, some of which have not been identified, may be involved. It is
likely that a combination of genetic and environmental factors lead to GPA.
GPA is an autoimmune disorder. Such disorders occur when the immune system
malfunctions and attacks the body's own tissues and organs. Approximately 90 percent
of people with GPA have an abnormal immune protein called an anti-neutrophil
cytoplasmic antibody (ANCA) in their blood. Antibodies normally bind to specific foreign
particles and germs, marking them for destruction, but ANCAs attack normal human
proteins. Most people with GPA have an ANCA that attacks the human protein
proteinase 3 (PR3). A few affected individuals have an ANCA that attacks a protein
called myeloperoxidase (MPO). When these antibodies attach to the protein they
recognize, they trigger inflammation, which contributes to the signs and symptoms of
GPA.
The HLA-DPB1 gene belongs to a family of genes called the human leukocyte antigen (
HLA) complex. The HLA complex helps the immune system distinguish the body's own
proteins from proteins made by foreign invaders (such as viruses and bacteria). Each
HLA gene has many different normal variations, allowing each person's immune system
to react to a wide range of foreign proteins. A particular variant of the HLA-DPB1 gene
called HLA-DPB1*0401 has been found more frequently in people with GPA, especially
those with ANCAs, than in people without the condition.
Because the HLA-DPB1 gene is involved in the immune system, changes in it might be
related to the autoimmune response and inflammation in the respiratory tract and
kidneys characteristic of GPA. However, it is unclear what specific role the HLA-DPB1*
0401 gene variant plays in development of this condition.
Learn more about the gene associated with Granulomatosis with polyangiitis
• HLA-DPB1
Inheritance
The inheritance pattern of GPA is unknown. Most instances are sporadic and occur in
individuals with no history of the disorder in their family. Only rarely is more than one
member of the same family affected by the disorder. |
Response should not be more than 100 words.
Model must only respond using information contained in the context block.
Model should not rely on its own knowledge or outside sources of information when responding. | What medications should be prescribed first for adults diagnosed with heart failure with reduced ejection fraction according to the NICE guidelines? | Chronic heart failure in adults: diagnosis and management NICE guideline Published: 12 September 2018 www.nice.org.uk/guidance/ng106 © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Your responsibility The recommendations in this guideline represent the view of NICE, arrived at after careful consideration of the evidence available. When exercising their judgement, professionals and practitioners are expected to take this guideline fully into account, alongside the individual needs, preferences and values of their patients or the people using their service. It is not mandatory to apply the recommendations, and the guideline does not override the responsibility to make decisions appropriate to the circumstances of the individual, in consultation with them and their families and carers or guardian. All problems (adverse events) related to a medicine or medical device used for treatment or in a procedure should be reported to the Medicines and Healthcare products Regulatory Agency using the Yellow Card Scheme. Local commissioners and providers of healthcare have a responsibility to enable the guideline to be applied when individual professionals and people using services wish to use it. They should do so in the context of local and national priorities for funding and developing services, and in light of their duties to have due regard to the need to eliminate unlawful discrimination, to advance equality of opportunity and to reduce health inequalities. Nothing in this guideline should be interpreted in a way that would be inconsistent with complying with those duties. Commissioners and providers have a responsibility to promote an environmentally sustainable health and care system and should assess and reduce the environmental impact of implementing NICE recommendations wherever possible. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 2 of 35 Contents Overview ...................................................................................................................................... 5 Who is it for? .......................................................................................................................................... 5 Recommendations ....................................................................................................................... 6 1.1 Team working in the management of heart failure ....................................................................... 6 1.2 Diagnosing heart failure .................................................................................................................. 9 1.3 Giving information to people with heart failure ............................................................................ 12 1.4 Treating heart failure with reduced ejection fraction .................................................................. 12 1.5 Treating heart failure with reduced ejection fraction in people with chronic kidney disease . 
17 1.6 Managing all types of heart failure ................................................................................................ 18 1.7 Monitoring treatment for all types of heart failure ....................................................................... 21 1.8 Interventional procedures ............................................................................................................... 22 1.9 Cardiac rehabilitation ...................................................................................................................... 23 1.10 Palliative care ................................................................................................................................. 24 Terms used in this guideline ................................................................................................................. 24 Putting this guideline into practice ............................................................................................ 26 Recommendations for research ................................................................................................. 28 1 Diuretic therapy for managing fluid overload in people with advanced heart failure in the community ............................................................................................................................................. 28 2 Cardiac MRI versus other imaging techniques for diagnosing heart failure ................................ 28 3 The impact of atrial fibrillation on the natriuretic peptide threshold for diagnosing heart failure ...................................................................................................................................................... 29 4 The impact of advanced kidney disease on the natriuretic peptide threshold for diagnosing heart failure ............................................................................................................................................ 30 5 Risk tools for predicting non-sudden death in heart failure .......................................................... 30 Context ......................................................................................................................................... 31 Key facts and figures ............................................................................................................................ 31 Current practice .................................................................................................................................... 31 Finding more information and committee details .....................................................................32 Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 3 of 35 Update information .....................................................................................................................33 Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 4 of 35 This guideline replaces CG108. This guideline is the basis of QS167, QS9 and QS181. Overview This guideline covers diagnosing and managing chronic heart failure in people aged 18 and over. It aims to improve diagnosis and treatment to increase the length and quality of life for people with heart failure. 
NICE has also produced a guideline on acute heart failure. Who is it for? • Healthcare professionals • People with heart failure and their families and carers Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 5 of 35 Recommendations People have the right to be involved in discussions and make informed decisions about their care, as described in NICE's information on making decisions about your care. Making decisions using NICE guidelines explains how we use words to show the strength (or certainty) of our recommendations, and has information about prescribing medicines (including off-label use), professional guidelines, standards and laws (including on consent and mental capacity), and safeguarding. 1.1 Team working in the management of heart failure 1.1.1 The core specialist heart failure multidisciplinary team (MDT) should work in collaboration with the primary care team, and should include: • a lead physician with subspecialty training in heart failure (usually a consultant cardiologist) who is responsible for making the clinical diagnosis • a specialist heart failure nurse • a healthcare professional with expertise in specialist prescribing for heart failure. [2018] 1.1.2 The specialist heart failure MDT should: • diagnose heart failure • give information to people newly diagnosed with heart failure (see the section on giving information to people with heart failure) • manage newly diagnosed, recently decompensated or advanced heart failure (NYHA [New York Heart Association] class III to IV) Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 6 of 35 • optimise treatment • start new medicines that need specialist supervision • continue to manage heart failure after an interventional procedure such as implantation of a cardioverter defibrillator or cardiac resynchronisation device • manage heart failure that is not responding to treatment. [2018] 1.1.3 The specialist heart failure MDT should directly involve, or refer people to, other services, including rehabilitation, services for older people and palliative care services, as needed. [2018] 1.1.4 The primary care team should carry out the following for people with heart failure at all times, including periods when the person is also receiving specialist heart failure care from the MDT: • ensure effective communication links between different care settings and clinical services involved in the person's care • lead a full review of the person's heart failure care, which may form part of a long-term conditions review • recall the person at least every 6 months and update the clinical record • ensure that changes to the clinical record are understood and agreed by the person with heart failure and shared with the specialist heart failure MDT • arrange access to specialist heart failure services if needed. [2018] Care after an acute event For recommendations on the diagnosis and management of acute heart failure, see the NICE guideline on acute heart failure. 1.1.5 People with heart failure should generally be discharged from hospital only when their clinical condition is stable and the management plan is optimised. 
Timing of discharge should take into account the wishes of the person and their family or Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 7 of 35 carer, and the level of care and support that can be provided in the community. [2003] 1.1.6 The primary care team should take over routine management of heart failure as soon as it has been stabilised and its management optimised. [2018] Writing a care plan 1.1.7 The specialist heart failure MDT should write a summary for each person with heart failure that includes: • diagnosis and aetiology • medicines prescribed, monitoring of medicines, when medicines should be reviewed and any support the person needs to take the medicines • functional abilities and any social care needs • social circumstances, including carers' needs. [2018] 1.1.8 The summary should form the basis of a care plan for each person, which should include: • plans for managing the person's heart failure, including follow-up care, rehabilitation and access to social care • symptoms to look out for in case of deterioration • a process for any subsequent access to the specialist heart failure MDT if needed • contact details for - a named healthcare coordinator (usually a specialist heart failure nurse) - alternative local heart failure specialist care providers, for urgent care or review. • additional sources of information for people with heart failure. [2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 8 of 35 1.1.9 Give a copy of the care plan to the person with heart failure, their family or carer if appropriate, and all health and social care professionals involved in their care. [2018] 1.2 Diagnosing heart failure Symptoms, signs and investigations 1.2.1 Take a careful and detailed history, and perform a clinical examination and tests to confirm the presence of heart failure. [2010] 1.2.2 Measure N-terminal pro-B-type natriuretic peptide (NT-proBNP) in people with suspected heart failure. [2018] 1.2.3 Because very high levels of NT-proBNP carry a poor prognosis, refer people with suspected heart failure and an NT-proBNP level above 2,000 ng/litre (236 pmol/ litre) urgently, to have specialist assessment and transthoracic echocardiography within 2 weeks. [2018] 1.2.4 Refer people with suspected heart failure and an NT-proBNP level between 400 and 2,000 ng/litre (47 to 236 pmol/litre) to have specialist assessment and transthoracic echocardiography within 6 weeks. [2018] 1.2.5 Be aware that: • an NT-proBNP level less than 400 ng/litre (47 pmol/litre) in an untreated person makes a diagnosis of heart failure less likely • the level of serum natriuretic peptide does not differentiate between heart failure with reduced ejection fraction and heart failure with preserved ejection fraction. [2018] 1.2.6 Review alternative causes for symptoms of heart failure in people with NTproBNP levels below 400 ng/litre. If there is still concern that the symptoms might be related to heart failure, discuss with a physician with subspeciality training in heart failure. [2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). 
Page 9 of 35 1.2.7 Be aware that: • obesity, African or African–Caribbean family background, or treatment with diuretics, angiotensin-converting enzyme (ACE) inhibitors, beta-blockers, angiotensin II receptor blockers (ARBs) or mineralocorticoid receptor antagonists (MRAs) can reduce levels of serum natriuretic peptides • high levels of serum natriuretic peptides can have causes other than heart failure (for example, age over 70 years, left ventricular hypertrophy, ischaemia, tachycardia, right ventricular overload, hypoxaemia [including pulmonary embolism], renal dysfunction [eGFR less than 60 ml/minute/ 1.73 m 2 ], sepsis, chronic obstructive pulmonary disease, diabetes, or cirrhosis of the liver). [2010, amended 2018] 1.2.8 Perform transthoracic echocardiography to exclude important valve disease, assess the systolic (and diastolic) function of the (left) ventricle, and detect intracardiac shunts. See the section on referral for echocardiography and specialist assessment in the NICE guideline on heart valve disease. [2003, amended 2018] 1.2.9 Transthoracic echocardiography should be performed on high-resolution equipment by experienced operators trained to the relevant professional standards. Need and demand for these studies should not compromise quality. [2003, amended 2018] 1.2.10 Ensure that those reporting echocardiography are experienced in doing so. [2003] 1.2.11 Consider alternative methods of imaging the heart (for example, radionuclide angiography [multigated acquisition scanning], cardiac MRI or transoesophageal echocardiography) if a poor image is produced by transthoracic echocardiography. [2003, amended 2018] 1.2.12 Perform an ECG and consider the following tests to evaluate possible aggravating factors and/or alternative diagnoses: • chest X-ray • blood tests: Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 10 of 35 - renal function profile - thyroid function profile - liver function profile - lipid profile - glycosylated haemoglobin (HbA1c) - full blood count • urinalysis • peak flow or spirometry. [2010, amended 2018] 1.2.13 Try to exclude other disorders that may present in a similar manner. [2003] 1.2.14 When a diagnosis of heart failure has been made, assess severity, aetiology, precipitating factors, type of cardiac dysfunction and correctable causes. [2010] Heart failure caused by valve disease 1.2.15 Refer people with heart failure caused by valve disease for specialist assessment and advice regarding follow-up. See the section on referral for echocardiography and specialist assessment in the NICE guideline on heart valve disease. [2003] Reviewing existing diagnoses 1.2.16 Review the basis for a historical diagnosis of heart failure, and manage care in accordance with this guideline only if the diagnosis is confirmed. [2003] 1.2.17 If the diagnosis of heart failure is still suspected, but confirmation of the underlying cardiac abnormality has not occurred, then the person should have appropriate further investigation. [2003] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). 
Page 11 of 35 1.3 Giving information to people with heart failure 1.3.1 When giving information to people with heart failure, follow the recommendations in the NICE guideline on patient experience in adult NHS services. [2018] 1.3.2 Discuss the person's prognosis in a sensitive, open and honest manner. Be frank about the uncertainty in predicting the course of their heart failure. Revisit this discussion as the person's condition evolves. [2018] 1.3.3 Provide information whenever needed throughout the person's care. [2018] 1.3.4 Consider training in advanced communication skills for all healthcare professionals working with people who have heart failure. [2018] First consultations for people newly diagnosed with heart failure 1.3.5 The specialist heart failure MDT should offer people newly diagnosed with heart failure an extended first consultation, followed by a second consultation to take place within 2 weeks if possible. At each consultation: • discuss the person's diagnosis and prognosis • explain heart failure terminology • discuss treatments • address the risk of sudden death, including any misconceptions about that risk • encourage the person and their family or carers to ask any questions they have. [2018] 1.4 Treating heart failure with reduced ejection fraction See the section on managing all types of heart failure for general recommendations on managing all types of heart failure. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 12 of 35 See NICE's technology appraisal guidance on dapagliflozin and empagliflozin for treating chronic heart failure with reduced ejection fraction. When managing pharmacological treatment, follow the recommendations in the NICE guidelines on medicines adherence and medicines optimisation. First-line treatment 1.4.1 Offer an angiotensin-converting enzyme (ACE) inhibitor and a beta-blocker licensed for heart failure to people who have heart failure with reduced ejection fraction. Use clinical judgement when deciding which drug to start first. [2010] ACE inhibitors 1.4.2 Do not offer ACE inhibitor therapy if there is a clinical suspicion of haemodynamically significant valve disease until the valve disease has been assessed by a specialist. See the section on referral for echocardiography and specialist assessment in the NICE guideline on heart valve disease. [2003] 1.4.3 Start ACE inhibitor therapy at a low dose and titrate upwards at short intervals (for example, every 2 weeks) until the target or maximum tolerated dose is reached. [2010] 1.4.4 Measure serum sodium and potassium, and assess renal function, before and 1 to 2 weeks after starting an ACE inhibitor, and after each dose increment. [2010, amended 2018] 1.4.5 Measure blood pressure before and after each dose increment of an ACE inhibitor. Follow the recommendations on measuring blood pressure, including measurement in people with symptoms of postural hypotension, in the NICE guideline on hypertension in adults. [2018] 1.4.6 Once the target or maximum tolerated dose of an ACE inhibitor is reached, monitor treatment monthly for 3 months and then at least every 6 months, and at any time the person becomes acutely unwell. [2010, amended 2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). 
Page 13 of 35 Alternative treatments if ACE inhibitors are not tolerated 1.4.7 Consider an ARB licensed for heart failure as an alternative to an ACE inhibitor for people who have heart failure with reduced ejection fraction and intolerable side effects with ACE inhibitors. [2010] 1.4.8 Measure serum sodium and potassium, and assess renal function, before and after starting an ARB and after each dose increment. [2010, amended 2018] 1.4.9 Measure blood pressure after each dose increment of an ARB. Follow the recommendations on measuring blood pressure, including measurement in people with symptoms of postural hypotension, in the NICE guideline on hypertension in adults. [2018] 1.4.10 Once the target or maximum tolerated dose of an ARB is reached, monitor treatment monthly for 3 months and then at least every 6 months, and at any time the person becomes acutely unwell. [2010, amended 2018] 1.4.11 If neither ACE inhibitors nor ARBs are tolerated, seek specialist advice and consider hydralazine in combination with nitrate for people who have heart failure with reduced ejection fraction. [2010] Beta-blockers 1.4.12 Do not withhold treatment with a beta-blocker solely because of age or the presence of peripheral vascular disease, erectile dysfunction, diabetes, interstitial pulmonary disease or chronic obstructive pulmonary disease. [2010] 1.4.13 Introduce beta-blockers in a 'start low, go slow' manner. Assess heart rate and clinical status after each titration. Measure blood pressure before and after each dose increment of a beta-blocker. [2010, amended 2018] 1.4.14 Switch people whose condition is stable and who are already taking a beta-blocker for a comorbidity (for example, angina or hypertension), and who develop heart failure with reduced ejection fraction, to a beta-blocker licensed for heart failure. [2010] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 14 of 35 Mineralocorticoid receptor antagonists 1.4.15 Offer a mineralocorticoid receptor antagonist (MRA), in addition to an ACE inhibitor (or ARB) and beta-blocker, to people who have heart failure with reduced ejection fraction if they continue to have symptoms of heart failure. [2018] 1.4.16 Measure serum sodium and potassium, and assess renal function, before and after starting an MRA and after each dose increment. [2018] 1.4.17 Measure blood pressure before and after each dose increment of an MRA. Follow the recommendations on measuring blood pressure, including measurement in people with symptoms of postural hypotension, in the NICE guideline on hypertension in adults. [2018] 1.4.18 Once the target, or maximum tolerated, dose of an MRA is reached, monitor treatment monthly for 3 months and then at least every 6 months, and at any time the person becomes acutely unwell. [2018] Specialist treatment Ivabradine These recommendations are from the NICE technology appraisal guidance on ivabradine for treating chronic heart failure.
1.4.19 Ivabradine is recommended as an option for treating chronic heart failure for people: • with New York Heart Association (NYHA) class II to IV stable chronic heart failure with systolic dysfunction and • who are in sinus rhythm with a heart rate of 75 beats per minute (bpm) or more and • who are given ivabradine in combination with standard therapy including beta-blocker therapy, angiotensin-converting enzyme (ACE) inhibitors and Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 15 of 35 aldosterone antagonists, or when beta-blocker therapy is contraindicated or not tolerated and • with a left ventricular ejection fraction of 35% or less. [2012] 1.4.20 Ivabradine should only be initiated after a stabilisation period of 4 weeks on optimised standard therapy with ACE inhibitors, beta-blockers and aldosterone antagonists. [2012] 1.4.21 Ivabradine should be initiated by a heart failure specialist with access to a multidisciplinary heart failure team. Dose titration and monitoring should be carried out by a heart failure specialist, or in primary care by either a GP with a special interest in heart failure or a heart failure specialist nurse. [2012] Sacubitril valsartan These recommendations are from the NICE technology appraisal guidance on sacubitril valsartan for treating symptomatic chronic heart failure with reduced ejection fraction. 1.4.22 Sacubitril valsartan is recommended as an option for treating symptomatic chronic heart failure with reduced ejection fraction, only in people: • with New York Heart Association (NYHA) class II to IV symptoms and • with a left ventricular ejection fraction of 35% or less and • who are already taking a stable dose of angiotensin-converting enzyme (ACE) inhibitors or ARBs. [2016] 1.4.23 Treatment with sacubitril valsartan should be started by a heart failure specialist with access to a multidisciplinary heart failure team. Dose titration and monitoring should be performed by the most appropriate team member (see the section on team working in the management of heart failure). [2016] 1.4.24 This guidance is not intended to affect the position of patients whose treatment with sacubitril valsartan was started within the NHS before this guidance was published. Treatment of those patients may continue without change to whatever Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 16 of 35 funding arrangements were in place for them before this guidance was published until they and their NHS clinician consider it appropriate to stop. [2016] Hydralazine in combination with nitrate 1.4.25 Seek specialist advice and consider offering hydralazine in combination with nitrate (especially if the person is of African or Caribbean family origin and has moderate to severe heart failure [NYHA class III/IV] with reduced ejection fraction). [2010] Digoxin For recommendations on digoxin for people with atrial fibrillation see the section on rate and rhythm control in the NICE guideline on atrial fibrillation. 1.4.26 Digoxin is recommended for worsening or severe heart failure with reduced ejection fraction despite first-line treatment for heart failure. Seek specialist advice before initiating. 
[2010, amended 2018] 1.4.27 Routine monitoring of serum digoxin concentrations is not recommended. A digoxin concentration measured within 8 to 12 hours of the last dose may be useful to confirm a clinical impression of toxicity or non-adherence. [2003] 1.4.28 The serum digoxin concentration should be interpreted in the clinical context as toxicity may occur even when the concentration is within the 'therapeutic range'. [2003] 1.5 Treating heart failure with reduced ejection fraction in people with chronic kidney disease 1.5.1 For people who have heart failure with reduced ejection fraction and chronic kidney disease with an eGFR of 30 ml/min/1.73 m 2 or above: • offer the treatment outlined in the section on treating heart failure with Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 17 of 35 reduced ejection fraction and • if the person's eGFR is 45 ml/min/1.73 m 2 or below, consider lower doses and/ or slower titration of dose of ACE inhibitors or ARBs, MRAs and digoxin. [2018] 1.5.2 For people who have heart failure with reduced ejection fraction and chronic kidney disease with an eGFR below 30 ml/min/1.73 m 2 , the specialist heart failure MDT should consider liaising with a renal physician. [2018] 1.5.3 Monitor the response to titration of medicines closely in people who have heart failure with reduced ejection fraction and chronic kidney disease, taking into account the increased risk of hyperkalaemia. [2018] 1.6 Managing all types of heart failure When managing pharmacological treatment, follow the recommendations in the NICE guidelines on medicines adherence and medicines optimisation. Pharmacological treatment Diuretics 1.6.1 Diuretics should be routinely used for the relief of congestive symptoms and fluid retention in people with heart failure, and titrated (up and down) according to need following the initiation of subsequent heart failure therapies. [2003] 1.6.2 People who have heart failure with preserved ejection fraction should usually be offered a low to medium dose of loop diuretics (for example, less than 80 mg furosemide per day). People whose heart failure does not respond to this treatment will need further specialist advice. [2003, amended 2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 18 of 35 Calcium-channel blockers 1.6.3 Avoid verapamil, diltiazem and short-acting dihydropyridine agents in people who have heart failure with reduced ejection fraction. [2003, amended 2018] Amiodarone 1.6.4 Make the decision to prescribe amiodarone in consultation with a specialist. [2003] 1.6.5 Review the need to continue the amiodarone prescription at the 6-monthly clinical review. [2003, amended 2018] 1.6.6 Offer people taking amiodarone liver and thyroid function tests, and a review of side effects, as part of their routine 6-monthly clinical review. [2003, amended 2018] Anticoagulants 1.6.7 For people who have heart failure and atrial fibrillation, follow the recommendations on anticoagulation in the NICE guideline on atrial fibrillation. Be aware of the effects of impaired renal and liver function on anticoagulant therapies. 
[2018] 1.6.8 In people with heart failure in sinus rhythm, anticoagulation should be considered for those with a history of thromboembolism, left ventricular aneurysm or intracardiac thrombus. [2003] Vaccinations 1.6.9 Offer people with heart failure an annual vaccination against influenza. [2003] 1.6.10 Offer people with heart failure vaccination against pneumococcal disease (only required once). [2003] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 19 of 35 Contraception and pregnancy 1.6.11 In women of childbearing potential who have heart failure, contraception and pregnancy should be discussed. If pregnancy is being considered or occurs, specialist advice should be sought. Subsequently, specialist care should be shared between the cardiologist and obstetrician. [2003] Depression See NICE's guideline on depression in adults with a chronic physical health problem. Lifestyle advice Salt and fluid restriction 1.6.12 Do not routinely advise people with heart failure to restrict their sodium or fluid consumption. Ask about salt and fluid consumption and, if needed, advise as follows: • restricting fluids for people with dilutional hyponatraemia • reducing intake for people with high levels of salt and/or fluid consumption. Continue to review the need to restrict salt or fluid. [2018] 1.6.13 Advise people with heart failure to avoid salt substitutes that contain potassium. [2018] Smoking and alcohol See NICE's guidance on smoking and tobacco and alcohol. Air travel 1.6.14 Air travel will be possible for the majority of people with heart failure, depending Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 20 of 35 on their clinical condition at the time of travel. [2003] Driving 1.6.15 Large Goods Vehicle and Passenger Carrying Vehicle licence: physicians should be up to date with the latest Driver and Vehicle Licensing Agency (DVLA) guidelines. Check the DVLA website for regular updates. [2003] 1.7 Monitoring treatment for all types of heart failure See the section on treating heart failure with reduced ejection fraction for specific recommendations on monitoring treatment for heart failure with reduced ejection fraction. Clinical review 1.7.1 All people with chronic heart failure need monitoring. This monitoring should include: • a clinical assessment of functional capacity, fluid status, cardiac rhythm (minimum of examining the pulse), cognitive status and nutritional status • a review of medication, including need for changes and possible side effects • an assessment of renal function. Note: This is a minimum. People with comorbidities or co-prescribed medications will need further monitoring. Monitoring serum potassium is particularly important if a person is taking digoxin or an MRA. [2010, amended 2018] 1.7.2 More detailed monitoring will be needed if the person has significant comorbidity or if their condition has deteriorated since the previous review. [2003] 1.7.3 The frequency of monitoring should depend on the clinical status and stability of Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 21 of 35 the person. 
The monitoring interval should be short (days to 2 weeks) if the clinical condition or medication has changed, but is needed at least 6-monthly for stable people with proven heart failure. [2003] 1.7.4 People with heart failure who wish to be involved in monitoring of their condition should be provided with sufficient education and support from their healthcare professional to do this, with clear guidelines as to what to do in the event of deterioration. [2003] Measuring NT-proBNP 1.7.5 Consider measuring NT-proBNP (N-terminal pro-B-type natriuretic peptide) as part of a treatment optimisation protocol only in a specialist care setting for people aged under 75 who have heart failure with reduced ejection fraction and an eGFR above 60 ml/min/1.73 m 2 . [2018] 1.8 Interventional procedures Coronary revascularisation 1.8.1 Do not routinely offer coronary revascularisation to people who have heart failure with reduced ejection fraction and coronary artery disease. [2018] Cardiac transplantation 1.8.2 Specialist referral for transplantation should be considered for people with severe refractory symptoms or refractory cardiogenic shock. [2003] Implantable cardioverter defibrillators and cardiac resynchronisation therapy See NICE's technology appraisal guidance on implantable cardioverter defibrillators and cardiac resynchronisation therapy for arrhythmias and heart failure. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 22 of 35 1.8.3 When discussing implantation of a cardioverter defibrillator: • explain the risks, benefits and consequences of cardioverter defibrillator implantation, following the principles on shared decision making in the NICE guideline on patient experience in adult NHS services • ensure the person knows that the defibrillator function can be deactivated without affecting any cardiac resynchronisation or pacing, and reactivated later • explain the circumstances in which deactivation might be offered • discuss and dispel common misconceptions about the function of the device and the consequences of deactivation • provide the person and, if they wish, their family or carers with written information covering the information discussed. [2018] 1.8.4 Review the benefits and potential harms of a cardioverter defibrillator remaining active in a person with heart failure: • at each 6-monthly review of their heart failure care • whenever their care goals change • as part of advance care planning if it is thought they are nearing the end of life. [2018] 1.9 Cardiac rehabilitation 1.9.1 Offer people with heart failure a personalised, exercise-based cardiac rehabilitation programme, unless their condition is unstable. The programme: • should be preceded by an assessment to ensure that it is suitable for the person • should be provided in a format and setting (at home, in the community or in the hospital) that is easily accessible for the person Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 23 of 35 • should include a psychological and educational component • may be incorporated within an existing cardiac rehabilitation programme • should be accompanied by information about support available from healthcare professionals when the person is doing the programme. 
[2018] 1.10 Palliative care 1.10.1 Do not offer long-term home oxygen therapy for advanced heart failure. Be aware that long-term home oxygen therapy may be offered for comorbidities, such as for some people with chronic obstructive pulmonary disease (see the section on oxygen in the NICE guideline on chronic obstructive pulmonary disease in over 16s). [2018] 1.10.2 Do not use prognostic risk tools to determine whether to refer a person with heart failure to palliative care services. [2018] 1.10.3 If the symptoms of a person with heart failure are worsening despite optimal specialist treatment, discuss their palliative care needs with the specialist heart failure multidisciplinary team and consider a needs assessment for palliative care. [2018] 1.10.4 People with heart failure and their families or carers should have access to professionals with palliative care skills within the heart failure team. [2003] 1.10.5 If it is thought that a person may be entering the last 2 to 3 days of life, follow the NICE guideline on care of dying adults in the last days of life. [2018] Terms used in this guideline Heart failure with preserved ejection fraction This is usually associated with impaired left ventricular relaxation, rather than left ventricular contraction, and is characterised by normal or preserved left ventricular Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 24 of 35 ejection fraction with evidence of diastolic dysfunction . Heart failure with reduced ejection fraction Heart failure with an ejection fraction below 40%. Mineralocorticoid receptor antagonist A drug that antagonises the action of aldosterone at mineralocorticoid receptors. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 25 of 35 Putting this guideline into practice NICE has produced tools and resources to help you put this guideline into practice. Putting recommendations into practice can take time. How long may vary from guideline to guideline, and depends on how much change in practice or services is needed. Implementing change is most effective when aligned with local priorities. Changes recommended for clinical practice that can be done quickly – like changes in prescribing practice – should be shared quickly. This is because healthcare professionals should use guidelines to guide their work – as is required by professional regulating bodies such as the General Medical and Nursing and Midwifery Councils. Changes should be implemented as soon as possible, unless there is a good reason for not doing so (for example, if it would be better value for money if a package of recommendations were all implemented at once). Different organisations may need different approaches to implementation, depending on their size and function. Sometimes individual practitioners may be able to respond to recommendations to improve their practice more quickly than large organisations. Here are some pointers to help organisations put NICE guidelines into practice: 1. Raise awareness through routine communication channels, such as email or newsletters, regular meetings, internal staff briefings and other communications with all relevant partner organisations. Identify things staff can include in their own practice straight away. 2. 
Identify a lead with an interest in the topic to champion the guideline and motivate others to support its use and make service changes, and to find out any significant issues locally. 3. Carry out a baseline assessment against the recommendations to find out whether there are gaps in current service provision. 4. Think about what data you need to measure improvement and plan how you will collect it. You may want to work with other health and social care organisations and specialist Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 26 of 35 groups to compare current practice with the recommendations. This may also help identify local issues that will slow or prevent implementation. 5. Develop an action plan, with the steps needed to put the guideline into practice, and make sure it is ready as soon as possible. Big, complex changes may take longer to implement, but some may be quick and easy to do. An action plan will help in both cases. 6. For very big changes include milestones and a business case, which will set out additional costs, savings and possible areas for disinvestment. A small project group could develop the action plan. The group might include the guideline champion, a senior organisational sponsor, staff involved in the associated services, finance and information professionals. 7. Implement the action plan with oversight from the lead and the project group. Big projects may also need project management support. 8. Review and monitor how well the guideline is being implemented through the project group. Share progress with those involved in making improvements, as well as relevant boards and local partners. NICE provides a comprehensive programme of support and resources to maximise uptake and use of evidence and guidance. See NICE's into practice pages for more information. Also see Leng G, Moore V, Abraham S, editors (2014) Achieving high quality care – practical experience from NICE. Chichester: Wiley. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 27 of 35 Recommendations for research The guideline committee has made the following key recommendations for research. The committee's full set of research recommendations is detailed in the full guideline. 1 Diuretic therapy for managing fluid overload in people with advanced heart failure in the community In people with advanced heart failure and significant peripheral fluid overload, what is the clinical and cost effectiveness of oral, subcutaneous and intravenous diuretic therapy in the community? Why this is important This research is critical to inform practice of how best to manage people with advanced heart failure in the community if they develop significant peripheral fluid overload. These people are more likely to have multiple admissions that, together with fluid overload, have a negative impact on their quality of life. Management in the community can minimise disruption for the person and reduce costs from hospital admissions. Knowledge of the most clinically and cost-effective routes of administration for diuretic therapy will dictate the level of resource needed to provide the service. Intravenous and subcutaneous diuretics usually need to be administered by nursing or healthcare staff. 
However, a pump for self-administration of subcutaneous diuretics has recently been developed. Oral formulations can be self-administered. 2 Cardiac MRI versus other imaging techniques for diagnosing heart failure What is the optimal imaging technique for the diagnosis of heart failure? Why this is important The role of cardiac MRI in the detection and characterisation of several structural and Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 28 of 35 functional cardiac abnormalities has become well established over the past 25 years. In people with heart failure, cardiac MRI provides reliable and reproducible assessments of the left ventricular (and to a degree the right ventricular) shapes, volumes and ejection fractions. It also provides spatial assessments of the congenital and acquired structural abnormalities of the heart and their interrelationships with the remainder of the heart, as well as functional and haemodynamic assessments of these abnormalities on the heart's performance. Finally, cardiac MRI provides valuable information about the myocardial structure and metabolism, including the presence of inflammation, scarring, fibrosis and infiltration. Cardiac MRI is an expensive form of imaging, and much of this diagnostic information could be provided by less costly non-invasive imaging techniques, chiefly echocardiography. This question aims to find the most clinically and cost-effective imaging technique for the clinical diagnosis of heart failure. 3 The impact of atrial fibrillation on the natriuretic peptide threshold for diagnosing heart failure What is the optimal NT-proBNP threshold for the diagnosis of heart failure in people with atrial fibrillation? Why this is important Atrial fibrillation is a common arrhythmia in the general population, and occurs in 30 to 40% of people with heart failure. Atrial fibrillation can raise the level of serum natriuretic peptides, including NT-proBNP, even in the absence of heart failure. This is complicated further in heart failure with preserved ejection fraction, in which 2 echocardiographic diagnostic criteria become unreliable (the left atrial volume and the tissue Doppler imaging assessment of diastolic function). These factors contribute to the complexity of the diagnosis and have a potential impact on the usual thresholds for NT-proBNP in people who have atrial fibrillation. This has been recognised in several ongoing randomised controlled trials of heart failure, which are using higher NT-proBNP thresholds for the diagnosis of heart failure in people with atrial fibrillation. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 29 of 35 4 The impact of advanced kidney disease on the natriuretic peptide threshold for diagnosing heart failure What are the optimal NT-proBNP thresholds for diagnosing heart failure in people with stage IIIb, IV or V chronic kidney disease? Why this is important Heart failure incidence and prevalence increase with age, with the rise starting at age 65 and peaking between 75 and 85. Both advancing age and heart failure are associated with a gradual and progressive decline in renal function.
In addition, the progression of heart failure and some treatments for heart failure lead to progressive deterioration of renal function. A decline in renal function is associated with increased fluid retention and a rise in the level of the serum natriuretic peptides, including NT-proBNP, even in the absence of heart failure. There is some evidence that the use of higher NT-proBNP thresholds would improve diagnostic accuracy for heart failure in people with significant deterioration of creatinine clearance. 5 Risk tools for predicting non-sudden death in heart failure What is the most accurate prognostic risk tool in predicting 1-year mortality from heart failure at specific clinically relevant thresholds (for example, sensitivity, specificity, negative predictive value and positive predictive value at a threshold of 50% risk of mortality at 1 year)? Why this is important There are a number of validated prognostic risk tools for heart failure but most do not report sensitivity and specificity at clinically relevant thresholds. This information is crucial to enable accurate prediction of a person's risk of mortality. The ability to accurately predict a person's prognosis would allow clearer communication and timely referral to other services such as palliative care. Inaccurate prediction has the potential to lead to significant psychological harm and increased morbidity. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 30 of 35 Context Key facts and figures Heart failure is a complex clinical syndrome of symptoms and signs that suggest the efficiency of the heart as a pump is impaired. It is caused by structural or functional abnormalities of the heart. Around 920,000 people in the UK today have been diagnosed with heart failure. Both the incidence and prevalence of heart failure increase steeply with age, and the average age at diagnosis is 77. Improvements in care have increased survival for people with ischaemic heart disease, and treatments for heart failure have become more effective. But the overall prevalence of heart failure is rising because of population ageing and increasing rates of obesity. Current practice Uptake of NICE's 2010 guidance on chronic heart failure appears to be good. However, the Department of Health and Social Care's policy paper on improving cardiovascular disease outcomes: strategy noted that prescribing of ACE inhibitors, beta-blockers and aldosterone antagonists remains suboptimal, and that improved use of these drugs has the potential to reduce hospitalisations and deaths caused by heart failure. This update reviewed evidence on the clinical and cost effectiveness of these therapies. Interdisciplinary working has contributed to better outcomes in heart failure but there is further room to improve the provision of multidisciplinary teams (MDTs) and integrate them more fully into healthcare processes. This update highlights and further expands on the roles of the MDT and collaboration between the MDT and the primary care team. The Department of Health and Social Care's policy paper on improving cardiovascular disease outcomes: strategy also noted that the proportion of people with heart failure who have cardiac rehabilitation was around 4%, and that increasing this proportion would reduce mortality and hospitalisation. 
This update recommends that all people with heart failure are offered an easily accessible, exercise-based cardiac rehabilitation programme, if this is suitable for them. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 31 of 35 Finding more information and committee details To find out what NICE has said on related topics, including guidance in development, see the NICE topic page on cardiovascular conditions. For full details of the evidence and the guideline committee's discussions, see the full guideline. You can also find information about how the guideline was developed, including details of the committee. NICE has produced tools and resources to help you put this guideline into practice. For general help and advice on putting our guidelines into practice, see resources to help you put NICE guidance into practice. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 32 of 35 Update information September 2018: This guideline updates and replaces NICE clinical guideline 108 (published August 2010). NICE clinical guideline 108 updated and replaced NICE clinical guideline 5 (published July 2003). Recommendations are marked as [2018], [2016], [2012], [2010], [2010, amended 2018], [2003], [2003, amended 2018] or [2003, amended 2010], [2018] indicates that the evidence was reviewed and the recommendation added, updated or unchanged in 2018. [2016] refers to NICE technology appraisal guidance published in 2016. [2012] refers to NICE technology appraisal guidance published in 2012. [2010] indicates that the evidence was reviewed in 2010. [2010, amended 2018] indicates that the evidence was reviewed in 2010 but changes were made to the recommendation wording in 2018 that changed the meaning. [2003] indicates that the evidence was reviewed in 2003. [2003, amended 2018] indicates that the evidence was reviewed in 2003 but changes were made to the recommendation wording in 2018 that changed the meaning. [2003, amended 2010] indicates that the evidence was reviewed in 2003 but changes were made to the recommendation wording in 2010 that changed the meaning. • 'Heart failure due to left ventricular systolic dysfunction (LVSD)' has been replaced in all recommendations by 'heart failure with reduced ejection fraction' in line with current terminology and the 2018 guideline scope. • 'Aldosterone antagonists' has been replaced in all recommendations by 'mineralocorticoid receptor antagonists (MRAs') to clarify the function of the receptor, and in line with the 2018 guideline scope. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 33 of 35 • 'African or African-Caribbean family origin' has been added to recommendation 1.2.7 because of the high incidence of heart failure with preserved ejection fraction in these populations. Recent evidence shows that NT-proBNP levels are lower in people of west African family background and are a confounder in the diagnosis of heart failure. • Doppler 2D has been deleted from recommendations 1.2.8, 1.2.9 and 1.2.11 because all transthoracic echocardiography would have doppler 2D as a minimum and it is no longer necessary to specify this. 
• 'Multigated acquisition scanning' has been added to recommendation 1.2.11 to reflect current imaging technology. • Measurement of urea has been deleted from recommendations 1.2.12, 1.4.8 and 1.7.1 because the guideline committee agreed that it is not needed and is not part of renal function profiles in most centres in the UK. Blood tests for electrolytes, creatinine and eGFR have been grouped together under the term 'renal function profile' because they are provided as a unified set of analyses in the NHS. The term 'profile' is applied to a group of tests (assays). Thus these tests are more accurately described as 'profiles' as they contain multiple individual assays and have replaced thyroid function test, liver function test and lipid measurement. 'Fasting glucose' has been replaced by 'glycosylated haemoglobin (HbA1c)' in line with the NICE guidelines on diabetes. • Measurement of serum urea has been deleted from recommendation 1.4.4 because the guideline committee agreed that it is not needed and is not part of renal function profiles in most centres in the UK. Measurement of potassium has been added to ensure that monitoring is consistent across treatments. • Recommendations 1.4.6 and 1.4.10 have been added to clarify the timing of monitoring after treatment starts. • In recommendation 1.4.8, monitoring for hyperkalaemia has been replaced by potassium measurement for clarity. • Blood pressure measurement has been clarified in recommendation 1.4.13 and made consistent with other treatments. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 34 of 35 • As a result of new evidence the treatment pathway for heart failure with reduced ejection fraction in recommendation 1.4.26 has been amended. Second line treatment has been replaced by specialist treatment. A sentence has been added to clarify that specialist advice should be sought before starting treatment with digoxin. • The first part of recommendation 1.6.2 has been removed because it is now covered in section 1.1 on team working in the management of heart failure. • Amlodipine to treat hypertension has been deleted from recommendation 1.6.3 because it has been superseded by the NICE guideline on hypertension in adults. • 'Regularly' has been replaced by 'at the 6-monthly clinical review' in recommendation 1.6.5 for clarification. • The wording in recommendation 1.6.6 has been amended in line with recommendation 1.6.5. Minor changes since publication April 2022: In section 1.4 we added links to NICE's technology appraisal guidance on dapagliflozin and empagliflozin for treating chronic heart failure with reduced ejection fraction. November 2021: We added a link to the NICE guideline on heart valve disease in recommendations 1.2.8, 1.2.15 and 1.4.2. ISBN: 978-1-4731-3093-7 Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 35 of 35
| Response should not be more than 100 words.
Model must only respond using information contained in the context block.
Model should not rely on its own knowledge or outside sources of information when responding.
What medications should be prescribed first for adults diagnosed with heart failure with reduced ejection fraction according to the NICE guidelines?
Chronic heart failure in adults: diagnosis and management NICE guideline Published: 12 September 2018 www.nice.org.uk/guidance/ng106 © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Your responsibility The recommendations in this guideline represent the view of NICE, arrived at after careful consideration of the evidence available. When exercising their judgement, professionals and practitioners are expected to take this guideline fully into account, alongside the individual needs, preferences and values of their patients or the people using their service. It is not mandatory to apply the recommendations, and the guideline does not override the responsibility to make decisions appropriate to the circumstances of the individual, in consultation with them and their families and carers or guardian. All problems (adverse events) related to a medicine or medical device used for treatment or in a procedure should be reported to the Medicines and Healthcare products Regulatory Agency using the Yellow Card Scheme. Local commissioners and providers of healthcare have a responsibility to enable the guideline to be applied when individual professionals and people using services wish to use it. They should do so in the context of local and national priorities for funding and developing services, and in light of their duties to have due regard to the need to eliminate unlawful discrimination, to advance equality of opportunity and to reduce health inequalities. Nothing in this guideline should be interpreted in a way that would be inconsistent with complying with those duties. Commissioners and providers have a responsibility to promote an environmentally sustainable health and care system and should assess and reduce the environmental impact of implementing NICE recommendations wherever possible. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 2 of 35 Contents Overview ...................................................................................................................................... 5 Who is it for? .......................................................................................................................................... 5 Recommendations ....................................................................................................................... 6 1.1 Team working in the management of heart failure ....................................................................... 6 1.2 Diagnosing heart failure .................................................................................................................. 9 1.3 Giving information to people with heart failure ............................................................................ 12 1.4 Treating heart failure with reduced ejection fraction .................................................................. 12 1.5 Treating heart failure with reduced ejection fraction in people with chronic kidney disease . 17 1.6 Managing all types of heart failure ................................................................................................ 18 1.7 Monitoring treatment for all types of heart failure ....................................................................... 
21 1.8 Interventional procedures ............................................................................................................... 22 1.9 Cardiac rehabilitation ...................................................................................................................... 23 1.10 Palliative care ................................................................................................................................. 24 Terms used in this guideline ................................................................................................................. 24 Putting this guideline into practice ............................................................................................ 26 Recommendations for research ................................................................................................. 28 1 Diuretic therapy for managing fluid overload in people with advanced heart failure in the community ............................................................................................................................................. 28 2 Cardiac MRI versus other imaging techniques for diagnosing heart failure ................................ 28 3 The impact of atrial fibrillation on the natriuretic peptide threshold for diagnosing heart failure ...................................................................................................................................................... 29 4 The impact of advanced kidney disease on the natriuretic peptide threshold for diagnosing heart failure ............................................................................................................................................ 30 5 Risk tools for predicting non-sudden death in heart failure .......................................................... 30 Context ......................................................................................................................................... 31 Key facts and figures ............................................................................................................................ 31 Current practice .................................................................................................................................... 31 Finding more information and committee details .....................................................................32 Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 3 of 35 Update information .....................................................................................................................33 Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 4 of 35 This guideline replaces CG108. This guideline is the basis of QS167, QS9 and QS181. Overview This guideline covers diagnosing and managing chronic heart failure in people aged 18 and over. It aims to improve diagnosis and treatment to increase the length and quality of life for people with heart failure. NICE has also produced a guideline on acute heart failure. Who is it for? • Healthcare professionals • People with heart failure and their families and carers Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. 
Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 5 of 35 Recommendations People have the right to be involved in discussions and make informed decisions about their care, as described in NICE's information on making decisions about your care. Making decisions using NICE guidelines explains how we use words to show the strength (or certainty) of our recommendations, and has information about prescribing medicines (including off-label use), professional guidelines, standards and laws (including on consent and mental capacity), and safeguarding. 1.1 Team working in the management of heart failure 1.1.1 The core specialist heart failure multidisciplinary team (MDT) should work in collaboration with the primary care team, and should include: • a lead physician with subspecialty training in heart failure (usually a consultant cardiologist) who is responsible for making the clinical diagnosis • a specialist heart failure nurse • a healthcare professional with expertise in specialist prescribing for heart failure. [2018] 1.1.2 The specialist heart failure MDT should: • diagnose heart failure • give information to people newly diagnosed with heart failure (see the section on giving information to people with heart failure) • manage newly diagnosed, recently decompensated or advanced heart failure (NYHA [New York Heart Association] class III to IV) Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 6 of 35 • optimise treatment • start new medicines that need specialist supervision • continue to manage heart failure after an interventional procedure such as implantation of a cardioverter defibrillator or cardiac resynchronisation device • manage heart failure that is not responding to treatment. [2018] 1.1.3 The specialist heart failure MDT should directly involve, or refer people to, other services, including rehabilitation, services for older people and palliative care services, as needed. [2018] 1.1.4 The primary care team should carry out the following for people with heart failure at all times, including periods when the person is also receiving specialist heart failure care from the MDT: • ensure effective communication links between different care settings and clinical services involved in the person's care • lead a full review of the person's heart failure care, which may form part of a long-term conditions review • recall the person at least every 6 months and update the clinical record • ensure that changes to the clinical record are understood and agreed by the person with heart failure and shared with the specialist heart failure MDT • arrange access to specialist heart failure services if needed. [2018] Care after an acute event For recommendations on the diagnosis and management of acute heart failure, see the NICE guideline on acute heart failure. 1.1.5 People with heart failure should generally be discharged from hospital only when their clinical condition is stable and the management plan is optimised. Timing of discharge should take into account the wishes of the person and their family or Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 7 of 35 carer, and the level of care and support that can be provided in the community. 
[2003] 1.1.6 The primary care team should take over routine management of heart failure as soon as it has been stabilised and its management optimised. [2018] Writing a care plan 1.1.7 The specialist heart failure MDT should write a summary for each person with heart failure that includes: • diagnosis and aetiology • medicines prescribed, monitoring of medicines, when medicines should be reviewed and any support the person needs to take the medicines • functional abilities and any social care needs • social circumstances, including carers' needs. [2018] 1.1.8 The summary should form the basis of a care plan for each person, which should include: • plans for managing the person's heart failure, including follow-up care, rehabilitation and access to social care • symptoms to look out for in case of deterioration • a process for any subsequent access to the specialist heart failure MDT if needed • contact details for - a named healthcare coordinator (usually a specialist heart failure nurse) - alternative local heart failure specialist care providers, for urgent care or review. • additional sources of information for people with heart failure. [2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 8 of 35 1.1.9 Give a copy of the care plan to the person with heart failure, their family or carer if appropriate, and all health and social care professionals involved in their care. [2018] 1.2 Diagnosing heart failure Symptoms, signs and investigations 1.2.1 Take a careful and detailed history, and perform a clinical examination and tests to confirm the presence of heart failure. [2010] 1.2.2 Measure N-terminal pro-B-type natriuretic peptide (NT-proBNP) in people with suspected heart failure. [2018] 1.2.3 Because very high levels of NT-proBNP carry a poor prognosis, refer people with suspected heart failure and an NT-proBNP level above 2,000 ng/litre (236 pmol/ litre) urgently, to have specialist assessment and transthoracic echocardiography within 2 weeks. [2018] 1.2.4 Refer people with suspected heart failure and an NT-proBNP level between 400 and 2,000 ng/litre (47 to 236 pmol/litre) to have specialist assessment and transthoracic echocardiography within 6 weeks. [2018] 1.2.5 Be aware that: • an NT-proBNP level less than 400 ng/litre (47 pmol/litre) in an untreated person makes a diagnosis of heart failure less likely • the level of serum natriuretic peptide does not differentiate between heart failure with reduced ejection fraction and heart failure with preserved ejection fraction. [2018] 1.2.6 Review alternative causes for symptoms of heart failure in people with NTproBNP levels below 400 ng/litre. If there is still concern that the symptoms might be related to heart failure, discuss with a physician with subspeciality training in heart failure. [2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). 
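Purely as an illustration, the NT-proBNP referral thresholds in recommendations 1.2.3 to 1.2.6 can be read as a simple decision rule. The Python sketch below is a hypothetical rendering of that logic for teaching purposes only; the function name and messages are invented, boundary handling at exactly 400 or 2,000 ng/litre is a simplification, and it is not a clinical decision tool or a substitute for the guideline.

```python
def nt_probnp_referral(nt_probnp_ng_per_litre: float, untreated: bool = True) -> str:
    """Illustrative triage of suspected heart failure by NT-proBNP level (ng/litre).

    A teaching sketch of recommendations 1.2.3 to 1.2.6, not a clinical tool.
    """
    if nt_probnp_ng_per_litre > 2000:
        # Very high levels carry a poor prognosis: urgent 2-week pathway.
        return "Refer urgently: specialist assessment and echocardiography within 2 weeks"
    if nt_probnp_ng_per_litre >= 400:
        return "Refer: specialist assessment and echocardiography within 6 weeks"
    # Below 400 ng/litre in an untreated person, heart failure is less likely.
    if untreated:
        return ("Heart failure less likely: review alternative causes and discuss "
                "with a heart failure specialist if concern remains")
    return "Interpret with caution: treatment can lower natriuretic peptide levels"


print(nt_probnp_referral(2500))   # urgent 2-week pathway
print(nt_probnp_referral(850))    # 6-week pathway
print(nt_probnp_referral(120))    # consider alternative causes
```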
Page 9 of 35 1.2.7 Be aware that: • obesity, African or African–Caribbean family background, or treatment with diuretics, angiotensin-converting enzyme (ACE) inhibitors, beta-blockers, angiotensin II receptor blockers (ARBs) or mineralocorticoid receptor antagonists (MRAs) can reduce levels of serum natriuretic peptides • high levels of serum natriuretic peptides can have causes other than heart failure (for example, age over 70 years, left ventricular hypertrophy, ischaemia, tachycardia, right ventricular overload, hypoxaemia [including pulmonary embolism], renal dysfunction [eGFR less than 60 ml/minute/ 1.73 m 2 ], sepsis, chronic obstructive pulmonary disease, diabetes, or cirrhosis of the liver). [2010, amended 2018] 1.2.8 Perform transthoracic echocardiography to exclude important valve disease, assess the systolic (and diastolic) function of the (left) ventricle, and detect intracardiac shunts. See the section on referral for echocardiography and specialist assessment in the NICE guideline on heart valve disease. [2003, amended 2018] 1.2.9 Transthoracic echocardiography should be performed on high-resolution equipment by experienced operators trained to the relevant professional standards. Need and demand for these studies should not compromise quality. [2003, amended 2018] 1.2.10 Ensure that those reporting echocardiography are experienced in doing so. [2003] 1.2.11 Consider alternative methods of imaging the heart (for example, radionuclide angiography [multigated acquisition scanning], cardiac MRI or transoesophageal echocardiography) if a poor image is produced by transthoracic echocardiography. [2003, amended 2018] 1.2.12 Perform an ECG and consider the following tests to evaluate possible aggravating factors and/or alternative diagnoses: • chest X-ray • blood tests: Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 10 of 35 - renal function profile - thyroid function profile - liver function profile - lipid profile - glycosylated haemoglobin (HbA1c) - full blood count • urinalysis • peak flow or spirometry. [2010, amended 2018] 1.2.13 Try to exclude other disorders that may present in a similar manner. [2003] 1.2.14 When a diagnosis of heart failure has been made, assess severity, aetiology, precipitating factors, type of cardiac dysfunction and correctable causes. [2010] Heart failure caused by valve disease 1.2.15 Refer people with heart failure caused by valve disease for specialist assessment and advice regarding follow-up. See the section on referral for echocardiography and specialist assessment in the NICE guideline on heart valve disease. [2003] Reviewing existing diagnoses 1.2.16 Review the basis for a historical diagnosis of heart failure, and manage care in accordance with this guideline only if the diagnosis is confirmed. [2003] 1.2.17 If the diagnosis of heart failure is still suspected, but confirmation of the underlying cardiac abnormality has not occurred, then the person should have appropriate further investigation. [2003] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). 
Page 11 of 35 1.3 Giving information to people with heart failure 1.3.1 When giving information to people with heart failure, follow the recommendations in the NICE guideline on patient experience in adult NHS services. [2018] 1.3.2 Discuss the person's prognosis in a sensitive, open and honest manner. Be frank about the uncertainty in predicting the course of their heart failure. Revisit this discussion as the person's condition evolves. [2018] 1.3.3 Provide information whenever needed throughout the person's care. [2018] 1.3.4 Consider training in advanced communication skills for all healthcare professionals working with people who have heart failure. [2018] First consultations for people newly diagnosed with heart failure 1.3.5 The specialist heart failure MDT should offer people newly diagnosed with heart failure an extended first consultation, followed by a second consultation to take place within 2 weeks if possible. At each consultation: • discuss the person's diagnosis and prognosis • explain heart failure terminology • discuss treatments • address the risk of sudden death, including any misconceptions about that risk • encourage the person and their family or carers to ask any questions they have. [2018] 1.4 Treating heart failure with reduced ejection fraction See the section on managing all types of heart failure for general recommendations on managing all types of heart failure. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 12 of 35 See NICE's technology appraisal guidance on dapagliflozin and empagliflozin for treating chronic heart failure with reduced ejection fraction. When managing pharmacological treatment, follow the recommendations in the NICE guidelines on medicines adherence and medicines optimisation. First-line treatment 1.4.1 Offer an angiotensin-converting enzyme (ACE) inhibitor and a beta-blocker licensed for heart failure to people who have heart failure with reduced ejection fraction. Use clinical judgement when deciding which drug to start first. [2010] ACE inhibitors 1.4.2 Do not offer ACE inhibitor therapy if there is a clinical suspicion of haemodynamically significant valve disease until the valve disease has been assessed by a specialist. See the section on referral for echocardiography and specialist assessment in the NICE guideline on heart valve disease. [2003] 1.4.3 Start ACE inhibitor therapy at a low dose and titrate upwards at short intervals (for example, every 2 weeks) until the target or maximum tolerated dose is reached. [2010] 1.4.4 Measure serum sodium and potassium, and assess renal function, before and 1 to 2 weeks after starting an ACE inhibitor, and after each dose increment. [2010, amended 2018] 1.4.5 Measure blood pressure before and after each dose increment of an ACE inhibitor. Follow the recommendations on measuring blood pressure, including measurement in people with symptoms of postural hypotension, in the NICE guideline on hypertension in adults. [2018] 1.4.6 Once the target or maximum tolerated dose of an ACE inhibitor is reached, monitor treatment monthly for 3 months and then at least every 6 months, and at any time the person becomes acutely unwell. [2010, amended 2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). 
Alternative treatments if ACE inhibitors are not tolerated 1.4.7 Consider an ARB licensed for heart failure as an alternative to an ACE inhibitor for people who have heart failure with reduced ejection fraction and intolerable side effects with ACE inhibitors. [2010] 1.4.8 Measure serum sodium and potassium, and assess renal function, before and after starting an ARB and after each dose increment. [2010, amended 2018] 1.4.9 Measure blood pressure after each dose increment of an ARB. Follow the recommendations on measuring blood pressure, including measurement in people with symptoms of postural hypotension, in the NICE guideline on hypertension in adults. [2018] 1.4.10 Once the target or maximum tolerated dose of an ARB is reached, monitor treatment monthly for 3 months and then at least every 6 months, and at any time the person becomes acutely unwell. [2010, amended 2018] 1.4.11 If neither ACE inhibitors nor ARBs are tolerated, seek specialist advice and consider hydralazine in combination with nitrate for people who have heart failure with reduced ejection fraction. [2010] Beta-blockers 1.4.12 Do not withhold treatment with a beta-blocker solely because of age or the presence of peripheral vascular disease, erectile dysfunction, diabetes, interstitial pulmonary disease or chronic obstructive pulmonary disease. [2010] 1.4.13 Introduce beta-blockers in a 'start low, go slow' manner. Assess heart rate and clinical status after each titration. Measure blood pressure before and after each dose increment of a beta-blocker. [2010, amended 2018] 1.4.14 Switch people whose condition is stable and who are already taking a beta-blocker for a comorbidity (for example, angina or hypertension), and who develop heart failure with reduced ejection fraction, to a beta-blocker licensed for heart failure. [2010] Mineralocorticoid receptor antagonists 1.4.15 Offer a mineralocorticoid receptor antagonist (MRA), in addition to an ACE inhibitor (or ARB) and beta-blocker, to people who have heart failure with reduced ejection fraction if they continue to have symptoms of heart failure. [2018] 1.4.16 Measure serum sodium and potassium, and assess renal function, before and after starting an MRA and after each dose increment. [2018] 1.4.17 Measure blood pressure before and after each dose increment of an MRA. Follow the recommendations on measuring blood pressure, including measurement in people with symptoms of postural hypotension, in the NICE guideline on hypertension in adults. [2018] 1.4.18 Once the target, or maximum tolerated, dose of an MRA is reached, monitor treatment monthly for 3 months and then at least every 6 months, and at any time the person becomes acutely unwell. [2018] Specialist treatment Ivabradine These recommendations are from the NICE technology appraisal guidance on ivabradine for treating chronic heart failure. 
1.4.19 Ivabradine is recommended as an option for treating chronic heart failure for people: • with New York Heart Association (NYHA) class II to IV stable chronic heart failure with systolic dysfunction and • who are in sinus rhythm with a heart rate of 75 beats per minute (bpm) or more and • who are given ivabradine in combination with standard therapy including beta-blocker therapy, angiotensin-converting enzyme (ACE) inhibitors and Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 15 of 35 aldosterone antagonists, or when beta-blocker therapy is contraindicated or not tolerated and • with a left ventricular ejection fraction of 35% or less. [2012] 1.4.20 Ivabradine should only be initiated after a stabilisation period of 4 weeks on optimised standard therapy with ACE inhibitors, beta-blockers and aldosterone antagonists. [2012] 1.4.21 Ivabradine should be initiated by a heart failure specialist with access to a multidisciplinary heart failure team. Dose titration and monitoring should be carried out by a heart failure specialist, or in primary care by either a GP with a special interest in heart failure or a heart failure specialist nurse. [2012] Sacubitril valsartan These recommendations are from the NICE technology appraisal guidance on sacubitril valsartan for treating symptomatic chronic heart failure with reduced ejection fraction. 1.4.22 Sacubitril valsartan is recommended as an option for treating symptomatic chronic heart failure with reduced ejection fraction, only in people: • with New York Heart Association (NYHA) class II to IV symptoms and • with a left ventricular ejection fraction of 35% or less and • who are already taking a stable dose of angiotensin-converting enzyme (ACE) inhibitors or ARBs. [2016] 1.4.23 Treatment with sacubitril valsartan should be started by a heart failure specialist with access to a multidisciplinary heart failure team. Dose titration and monitoring should be performed by the most appropriate team member (see the section on team working in the management of heart failure). [2016] 1.4.24 This guidance is not intended to affect the position of patients whose treatment with sacubitril valsartan was started within the NHS before this guidance was published. Treatment of those patients may continue without change to whatever Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 16 of 35 funding arrangements were in place for them before this guidance was published until they and their NHS clinician consider it appropriate to stop. [2016] Hydralazine in combination with nitrate 1.4.25 Seek specialist advice and consider offering hydralazine in combination with nitrate (especially if the person is of African or Caribbean family origin and has moderate to severe heart failure [NYHA class III/IV] with reduced ejection fraction). [2010] Digoxin For recommendations on digoxin for people with atrial fibrillation see the section on rate and rhythm control in the NICE guideline on atrial fibrillation. 1.4.26 Digoxin is recommended for worsening or severe heart failure with reduced ejection fraction despite first-line treatment for heart failure. Seek specialist advice before initiating. 
[2010, amended 2018] 1.4.27 Routine monitoring of serum digoxin concentrations is not recommended. A digoxin concentration measured within 8 to 12 hours of the last dose may be useful to confirm a clinical impression of toxicity or non-adherence. [2003] 1.4.28 The serum digoxin concentration should be interpreted in the clinical context as toxicity may occur even when the concentration is within the 'therapeutic range'. [2003] 1.5 Treating heart failure with reduced ejection fraction in people with chronic kidney disease 1.5.1 For people who have heart failure with reduced ejection fraction and chronic kidney disease with an eGFR of 30 ml/min/1.73 m 2 or above: • offer the treatment outlined in the section on treating heart failure with Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 17 of 35 reduced ejection fraction and • if the person's eGFR is 45 ml/min/1.73 m 2 or below, consider lower doses and/ or slower titration of dose of ACE inhibitors or ARBs, MRAs and digoxin. [2018] 1.5.2 For people who have heart failure with reduced ejection fraction and chronic kidney disease with an eGFR below 30 ml/min/1.73 m 2 , the specialist heart failure MDT should consider liaising with a renal physician. [2018] 1.5.3 Monitor the response to titration of medicines closely in people who have heart failure with reduced ejection fraction and chronic kidney disease, taking into account the increased risk of hyperkalaemia. [2018] 1.6 Managing all types of heart failure When managing pharmacological treatment, follow the recommendations in the NICE guidelines on medicines adherence and medicines optimisation. Pharmacological treatment Diuretics 1.6.1 Diuretics should be routinely used for the relief of congestive symptoms and fluid retention in people with heart failure, and titrated (up and down) according to need following the initiation of subsequent heart failure therapies. [2003] 1.6.2 People who have heart failure with preserved ejection fraction should usually be offered a low to medium dose of loop diuretics (for example, less than 80 mg furosemide per day). People whose heart failure does not respond to this treatment will need further specialist advice. [2003, amended 2018] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 18 of 35 Calcium-channel blockers 1.6.3 Avoid verapamil, diltiazem and short-acting dihydropyridine agents in people who have heart failure with reduced ejection fraction. [2003, amended 2018] Amiodarone 1.6.4 Make the decision to prescribe amiodarone in consultation with a specialist. [2003] 1.6.5 Review the need to continue the amiodarone prescription at the 6-monthly clinical review. [2003, amended 2018] 1.6.6 Offer people taking amiodarone liver and thyroid function tests, and a review of side effects, as part of their routine 6-monthly clinical review. [2003, amended 2018] Anticoagulants 1.6.7 For people who have heart failure and atrial fibrillation, follow the recommendations on anticoagulation in the NICE guideline on atrial fibrillation. Be aware of the effects of impaired renal and liver function on anticoagulant therapies. 
[2018] 1.6.8 In people with heart failure in sinus rhythm, anticoagulation should be considered for those with a history of thromboembolism, left ventricular aneurysm or intracardiac thrombus. [2003] Vaccinations 1.6.9 Offer people with heart failure an annual vaccination against influenza. [2003] 1.6.10 Offer people with heart failure vaccination against pneumococcal disease (only required once). [2003] Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 19 of 35 Contraception and pregnancy 1.6.11 In women of childbearing potential who have heart failure, contraception and pregnancy should be discussed. If pregnancy is being considered or occurs, specialist advice should be sought. Subsequently, specialist care should be shared between the cardiologist and obstetrician. [2003] Depression See NICE's guideline on depression in adults with a chronic physical health problem. Lifestyle advice Salt and fluid restriction 1.6.12 Do not routinely advise people with heart failure to restrict their sodium or fluid consumption. Ask about salt and fluid consumption and, if needed, advise as follows: • restricting fluids for people with dilutional hyponatraemia • reducing intake for people with high levels of salt and/or fluid consumption. Continue to review the need to restrict salt or fluid. [2018] 1.6.13 Advise people with heart failure to avoid salt substitutes that contain potassium. [2018] Smoking and alcohol See NICE's guidance on smoking and tobacco and alcohol. Air travel 1.6.14 Air travel will be possible for the majority of people with heart failure, depending Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 20 of 35 on their clinical condition at the time of travel. [2003] Driving 1.6.15 Large Goods Vehicle and Passenger Carrying Vehicle licence: physicians should be up to date with the latest Driver and Vehicle Licensing Agency (DVLA) guidelines. Check the DVLA website for regular updates. [2003] 1.7 Monitoring treatment for all types of heart failure See the section on treating heart failure with reduced ejection fraction for specific recommendations on monitoring treatment for heart failure with reduced ejection fraction. Clinical review 1.7.1 All people with chronic heart failure need monitoring. This monitoring should include: • a clinical assessment of functional capacity, fluid status, cardiac rhythm (minimum of examining the pulse), cognitive status and nutritional status • a review of medication, including need for changes and possible side effects • an assessment of renal function. Note: This is a minimum. People with comorbidities or co-prescribed medications will need further monitoring. Monitoring serum potassium is particularly important if a person is taking digoxin or an MRA. [2010, amended 2018] 1.7.2 More detailed monitoring will be needed if the person has significant comorbidity or if their condition has deteriorated since the previous review. [2003] 1.7.3 The frequency of monitoring should depend on the clinical status and stability of Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 21 of 35 the person. 
The monitoring interval should be short (days to 2 weeks) if the clinical condition or medication has changed, but is needed at least 6-monthly for stable people with proven heart failure. [2003] 1.7.4 People with heart failure who wish to be involved in monitoring of their condition should be provided with sufficient education and support from their healthcare professional to do this, with clear guidelines as to what to do in the event of deterioration. [2003] Measuring NT-proBNP 1.7.5 Consider measuring NT-proBNP (N-terminal pro-B-type natriuretic peptide) as part of a treatment optimisation protocol only in a specialist care setting for people aged under 75 who have heart failure with reduced ejection fraction and an eGFR above 60 ml/min/1.73 m 2 . [2018] 1.8 Interventional procedures Coronary revascularisation 1.8.1 Do not routinely offer coronary revascularisation to people who have heart failure with reduced ejection fraction and coronary artery disease. [2018] Cardiac transplantation 1.8.2 Specialist referral for transplantation should be considered for people with severe refractory symptoms or refractory cardiogenic shock. [2003] Implantable cardioverter defibrillators and cardiac resynchronisation therapy See NICE's technology appraisal guidance on implantable cardioverter defibrillators and cardiac resynchronisation therapy for arrhythmias and heart failure. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 22 of 35 1.8.3 When discussing implantation of a cardioverter defibrillator: • explain the risks, benefits and consequences of cardioverter defibrillator implantation, following the principles on shared decision making in the NICE guideline on patient experience in adult NHS services • ensure the person knows that the defibrillator function can be deactivated without affecting any cardiac resynchronisation or pacing, and reactivated later • explain the circumstances in which deactivation might be offered • discuss and dispel common misconceptions about the function of the device and the consequences of deactivation • provide the person and, if they wish, their family or carers with written information covering the information discussed. [2018] 1.8.4 Review the benefits and potential harms of a cardioverter defibrillator remaining active in a person with heart failure: • at each 6-monthly review of their heart failure care • whenever their care goals change • as part of advance care planning if it is thought they are nearing the end of life. [2018] 1.9 Cardiac rehabilitation 1.9.1 Offer people with heart failure a personalised, exercise-based cardiac rehabilitation programme, unless their condition is unstable. The programme: • should be preceded by an assessment to ensure that it is suitable for the person • should be provided in a format and setting (at home, in the community or in the hospital) that is easily accessible for the person Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 23 of 35 • should include a psychological and educational component • may be incorporated within an existing cardiac rehabilitation programme • should be accompanied by information about support available from healthcare professionals when the person is doing the programme. 
[2018] 1.10 Palliative care 1.10.1 Do not offer long-term home oxygen therapy for advanced heart failure. Be aware that long-term home oxygen therapy may be offered for comorbidities, such as for some people with chronic obstructive pulmonary disease (see the section on oxygen in the NICE guideline on chronic obstructive pulmonary disease in over 16s). [2018] 1.10.2 Do not use prognostic risk tools to determine whether to refer a person with heart failure to palliative care services. [2018] 1.10.3 If the symptoms of a person with heart failure are worsening despite optimal specialist treatment, discuss their palliative care needs with the specialist heart failure multidisciplinary team and consider a needs assessment for palliative care. [2018] 1.10.4 People with heart failure and their families or carers should have access to professionals with palliative care skills within the heart failure team. [2003] 1.10.5 If it is thought that a person may be entering the last 2 to 3 days of life, follow the NICE guideline on care of dying adults in the last days of life. [2018] Terms used in this guideline Heart failure with preserved ejection fraction This is usually associated with impaired left ventricular relaxation, rather than left ventricular contraction, and is characterised by normal or preserved left ventricular Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 24 of 35 ejection fraction with evidence of diastolic dysfunction . Heart failure with reduced ejection fraction Heart failure with an ejection fraction below 40%. Mineralocorticoid receptor antagonist A drug that antagonises the action of aldosterone at mineralocorticoid receptors. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 25 of 35 Putting this guideline into practice NICE has produced tools and resources to help you put this guideline into practice. Putting recommendations into practice can take time. How long may vary from guideline to guideline, and depends on how much change in practice or services is needed. Implementing change is most effective when aligned with local priorities. Changes recommended for clinical practice that can be done quickly – like changes in prescribing practice – should be shared quickly. This is because healthcare professionals should use guidelines to guide their work – as is required by professional regulating bodies such as the General Medical and Nursing and Midwifery Councils. Changes should be implemented as soon as possible, unless there is a good reason for not doing so (for example, if it would be better value for money if a package of recommendations were all implemented at once). Different organisations may need different approaches to implementation, depending on their size and function. Sometimes individual practitioners may be able to respond to recommendations to improve their practice more quickly than large organisations. Here are some pointers to help organisations put NICE guidelines into practice: 1. Raise awareness through routine communication channels, such as email or newsletters, regular meetings, internal staff briefings and other communications with all relevant partner organisations. Identify things staff can include in their own practice straight away. 2. 
Identify a lead with an interest in the topic to champion the guideline and motivate others to support its use and make service changes, and to find out any significant issues locally. 3. Carry out a baseline assessment against the recommendations to find out whether there are gaps in current service provision. 4. Think about what data you need to measure improvement and plan how you will collect it. You may want to work with other health and social care organisations and specialist Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 26 of 35 groups to compare current practice with the recommendations. This may also help identify local issues that will slow or prevent implementation. 5. Develop an action plan, with the steps needed to put the guideline into practice, and make sure it is ready as soon as possible. Big, complex changes may take longer to implement, but some may be quick and easy to do. An action plan will help in both cases. 6. For very big changes include milestones and a business case, which will set out additional costs, savings and possible areas for disinvestment. A small project group could develop the action plan. The group might include the guideline champion, a senior organisational sponsor, staff involved in the associated services, finance and information professionals. 7. Implement the action plan with oversight from the lead and the project group. Big projects may also need project management support. 8. Review and monitor how well the guideline is being implemented through the project group. Share progress with those involved in making improvements, as well as relevant boards and local partners. NICE provides a comprehensive programme of support and resources to maximise uptake and use of evidence and guidance. See NICE's into practice pages for more information. Also see Leng G, Moore V, Abraham S, editors (2014) Achieving high quality care – practical experience from NICE. Chichester: Wiley. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 27 of 35 Recommendations for research The guideline committee has made the following key recommendations for research. The committee's full set of research recommendations is detailed in the full guideline. 1 Diuretic therapy for managing fluid overload in people with advanced heart failure in the community In people with advanced heart failure and significant peripheral fluid overload, what is the clinical and cost effectiveness of oral, subcutaneous and intravenous diuretic therapy in the community? Why this is important This research is critical to inform practice of how best to manage people with advanced heart failure in the community if they develop significant peripheral fluid overload. These people are more likely to have multiple admissions that, together with fluid overload, have a negative impact on their quality of life. Management in the community can minimise disruption for the person and reduce costs from hospital admissions. Knowledge of the most clinically and cost-effective routes of administration for diuretic therapy will dictate the level of resource needed to provide the service. Intravenous and subcutaneous diuretics usually need to be administered by nursing or healthcare staff. 
although a pump for self-administration of subcutaneous diuretics has recently been developed. Oral formulations can be self-administered. 2 Cardiac MRI versus other imaging techniques for diagnosing heart failure What is the optimal imaging technique for the diagnosis of heart failure? Why this is important The role of cardiac MRI in the detection and characterisation of several structural and Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 28 of 35 functional cardiac abnormalities has become well established over the past 25 years. In people with heart failure, cardiac MRI provides reliable and reproducible assessments of the left ventricular (and to a degree the right ventricular) shapes, volumes and ejection fractions. It also provides spatial assessments of the congenital and acquired structural abnormalities of the heart and their interrelationships with the remainder of the heart, as well as functional and haemodynamic assessments of these abnormalities on the heart's performance. Finally, cardiac MRI provides valuable information about the myocardial structure and metabolism, including the presence of inflammation, scarring, fibrosis and infiltration. Cardiac MRI is an expensive form of imaging, and much of this diagnostic information could be provided by less costly non-invasive imaging techniques, chiefly echocardiography. This question aims to find the most clinically and cost-effective imaging technique for the clinical diagnosis of heart failure. 3 The impact of atrial fibrillation on the natriuretic peptide threshold for diagnosing heart failure What is the optimal NT-proBNP threshold for the diagnosis of heart failure in people with atrial fibrillation? Why this is important Atrial fibrillation is a common arrhythmia in the general population, and occurs in 30 to 40% of people with heart failure. Atrial fibrillation can raise the level of serum natriuretic peptides, including NT-proBNP, even in the absence of heart failure. This is complicated further in heart failure with preserved ejection fraction, in which 2 echocardiographic diagnostic criteria become unreliable (the left atrial volume and the tissue doppler imaging assessment of diastolic function). These factors contribute to the complexity of the diagnosis and have a potential impact on the usual thresholds for NT-proBNP in people who have atrial fibrillation. This has been recognised in several ongoing randomised controlled trials of heart failure, which are using higher NT-proBNP thresholds for the diagnosis of heart failure in people with atrial fibrillation. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 29 of 35 4 The impact of advanced kidney disease on the natriuretic peptide threshold for diagnosing heart failure What are the optimal NT-proBNP thresholds for diagnosing heart failure in people with stage IIIb, IV or V chronic kidney disease? Why this is important Heart failure incidence and prevalence increase with age, with the rise starting at age 65 and peaking between 75 and 85. Both advancing age and heart failure are associated with a gradual and progressive decline in renal function. 
In addition, the progression of heart failure and some treatments for heart failure lead to progressive deterioration of renal function. A decline in renal function is associated with increased fluid retention and a rise in the level of the serum natriuretic peptides, including NT-proBNP, even in the absence of heart failure. There is some evidence that the use of higher NT-proBNP thresholds would improve diagnostic accuracy for heart failure in people with significant deterioration of creatinine clearance. 5 Risk tools for predicting non-sudden death in heart failure What is the most accurate prognostic risk tool in predicting 1-year mortality from heart failure at specific clinically relevant thresholds (for example, sensitivity, specificity, negative predictive value and positive predictive value at a threshold of 50% risk of mortality at 1 year)? Why this is important There are a number of validated prognostic risk tools for heart failure but most do not report sensitivity and specificity at clinically relevant thresholds. This information is crucial to enable accurate prediction of a person's risk of mortality. The ability to accurately predict a person's prognosis would allow clearer communication and timely referral to other services such as palliative care. Inaccurate prediction has the potential to lead to significant psychological harm and increased morbidity. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 30 of 35 Context Key facts and figures Heart failure is a complex clinical syndrome of symptoms and signs that suggest the efficiency of the heart as a pump is impaired. It is caused by structural or functional abnormalities of the heart. Around 920,000 people in the UK today have been diagnosed with heart failure. Both the incidence and prevalence of heart failure increase steeply with age, and the average age at diagnosis is 77. Improvements in care have increased survival for people with ischaemic heart disease, and treatments for heart failure have become more effective. But the overall prevalence of heart failure is rising because of population ageing and increasing rates of obesity. Current practice Uptake of NICE's 2010 guidance on chronic heart failure appears to be good. However, the Department of Health and Social Care's policy paper on improving cardiovascular disease outcomes: strategy noted that prescribing of ACE inhibitors, beta-blockers and aldosterone antagonists remains suboptimal, and that improved use of these drugs has the potential to reduce hospitalisations and deaths caused by heart failure. This update reviewed evidence on the clinical and cost effectiveness of these therapies. Interdisciplinary working has contributed to better outcomes in heart failure but there is further room to improve the provision of multidisciplinary teams (MDTs) and integrate them more fully into healthcare processes. This update highlights and further expands on the roles of the MDT and collaboration between the MDT and the primary care team. The Department of Health and Social Care's policy paper on improving cardiovascular disease outcomes: strategy also noted that the proportion of people with heart failure who have cardiac rehabilitation was around 4%, and that increasing this proportion would reduce mortality and hospitalisation. 
This update recommends that all people with heart failure are offered an easily accessible, exercise-based cardiac rehabilitation programme, if this is suitable for them. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 31 of 35 Finding more information and committee details To find out what NICE has said on related topics, including guidance in development, see the NICE topic page on cardiovascular conditions. For full details of the evidence and the guideline committee's discussions, see the full guideline. You can also find information about how the guideline was developed, including details of the committee. NICE has produced tools and resources to help you put this guideline into practice. For general help and advice on putting our guidelines into practice, see resources to help you put NICE guidance into practice. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 32 of 35 Update information September 2018: This guideline updates and replaces NICE clinical guideline 108 (published August 2010). NICE clinical guideline 108 updated and replaced NICE clinical guideline 5 (published July 2003). Recommendations are marked as [2018], [2016], [2012], [2010], [2010, amended 2018], [2003], [2003, amended 2018] or [2003, amended 2010], [2018] indicates that the evidence was reviewed and the recommendation added, updated or unchanged in 2018. [2016] refers to NICE technology appraisal guidance published in 2016. [2012] refers to NICE technology appraisal guidance published in 2012. [2010] indicates that the evidence was reviewed in 2010. [2010, amended 2018] indicates that the evidence was reviewed in 2010 but changes were made to the recommendation wording in 2018 that changed the meaning. [2003] indicates that the evidence was reviewed in 2003. [2003, amended 2018] indicates that the evidence was reviewed in 2003 but changes were made to the recommendation wording in 2018 that changed the meaning. [2003, amended 2010] indicates that the evidence was reviewed in 2003 but changes were made to the recommendation wording in 2010 that changed the meaning. • 'Heart failure due to left ventricular systolic dysfunction (LVSD)' has been replaced in all recommendations by 'heart failure with reduced ejection fraction' in line with current terminology and the 2018 guideline scope. • 'Aldosterone antagonists' has been replaced in all recommendations by 'mineralocorticoid receptor antagonists (MRAs') to clarify the function of the receptor, and in line with the 2018 guideline scope. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 33 of 35 • 'African or African-Caribbean family origin' has been added to recommendation 1.2.7 because of the high incidence of heart failure with preserved ejection fraction in these populations. Recent evidence shows that NT-proBNP levels are lower in people of west African family background and are a confounder in the diagnosis of heart failure. • Doppler 2D has been deleted from recommendations 1.2.8, 1.2.9 and 1.2.11 because all transthoracic echocardiography would have doppler 2D as a minimum and it is no longer necessary to specify this. 
• 'Multigated acquisition scanning' has been added to recommendation 1.2.11 to reflect current imaging technology. • Measurement of urea has been deleted from recommendations 1.2.12, 1.4.8 and 1.7.1 because the guideline committee agreed that it is not needed and is not part of renal function profiles in most centres in the UK. Blood tests for electrolytes, creatinine and eGFR have been grouped together under the term 'renal function profile' because they are provided as a unified set of analyses in the NHS. The term 'profile' is applied to a group of tests (assays). Thus these tests are more accurately described as 'profiles' as they contain multiple individual assays and have replaced thyroid function test, liver function test and lipid measurement. 'Fasting glucose' has been replaced by 'glycosylated haemoglobin (HbA1c)' in line with the NICE guidelines on diabetes. • Measurement of serum urea has been deleted from recommendation 1.4.4 because the guideline committee agreed that it is not needed and is not part of renal function profiles in most centres in the UK. Measurement of potassium has been added to ensure that monitoring is consistent across treatments. • Recommendations 1.4.6 and 1.4.10 have been added to clarify the timing of monitoring after treatment starts. • In recommendation 1.4.8, monitoring for hyperkalaemia has been replaced by potassium measurement for clarity. • Blood pressure measurement has been clarified in recommendation 1.4.13 and made consistent with other treatments. Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 34 of 35 • As a result of new evidence the treatment pathway for heart failure with reduced ejection fraction in recommendation 1.4.26 has been amended. Second line treatment has been replaced by specialist treatment. A sentence has been added to clarify that specialist advice should be sought before starting treatment with digoxin. • The first part of recommendation 1.6.2 has been removed because it is now covered in section 1.1 on team working in the management of heart failure. • Amlodipine to treat hypertension has been deleted from recommendation 1.6.3 because it has been superseded by the NICE guideline on hypertension in adults. • 'Regularly' has been replaced by 'at the 6-monthly clinical review' in recommendation 1.6.5 for clarification. • The wording in recommendation 1.6.6 has been amended in line with recommendation 1.6.5. Minor changes since publication April 2022: In section 1.4 we added links to NICE's technology appraisal guidance on dapagliflozin and empagliflozin for treating chronic heart failure with reduced ejection fraction. November 2021: We added a link to the NICE guideline on heart valve disease in recommendations 1.2.8, 1.2.15 and 1.4.2. ISBN: 978-1-4731-3093-7 Chronic heart failure in adults: diagnosis and management (NG106) © NICE 2024. All rights reserved. Subject to Notice of rights (https://www.nice.org.uk/terms-andconditions#notice-of-rights). Page 35 of 35
|
Do not refer to any outside information not found in this document to answer the prompt. Answer in a single sentence, but do not quote directly from the document. | What are the implications for AI in the medical space? | The Potential for Artificial Intelligence In Healthcare
Artificial intelligence (AI) and related technologies are increasingly
prevalent in business and society, and are beginning to be applied
to healthcare. These technologies have the potential to transform
many aspects of patient care, as well as administrative processes
within provider, payer and pharmaceutical organizations.
There are already a number of research studies suggesting that
AI can perform as well as or better than humans at key healthcare
tasks, such as diagnosing disease. Today, algorithms are already
outperforming radiologists at spotting malignant tumors, and
guiding researchers in how to construct cohorts for costly clinical
trials. However, for a variety of reasons, we believe that it will be
many years before AI replaces humans for broad medical process
domains. In this article, we describe both the potential that AI
offers to automate aspects of care and some of the barriers to
rapid implementation of AI in healthcare.
Types of AI of relevance to healthcare
Artificial intelligence is not one technology, but rather a collection
of them. Most of these technologies have immediate relevance
to the healthcare field, but the specific processes and tasks they support vary widely.
Some particular AI technologies of high
importance to healthcare are defined and described below.
Machine learning – neural networks and deep learning
Machine learning is a statistical technique for fitting models
to data and 'learning' by training those models with data. Machine
learning is one of the most common forms of AI; in a 2018
Deloitte survey of 1,100 US managers whose organizations
were already pursuing AI, 63% of companies surveyed were
employing machine learning in their businesses. 1
It is a broad
technique at the core of many approaches to AI and there are
many versions of it.
In healthcare, the most common application of traditional
machine learning is precision medicine – predicting what
treatment protocols are likely to succeed on a patient based on
various patient attributes and the treatment context. 2
The great
majority of machine learning and precision medicine applications
require a training dataset for which the outcome variable (eg onset
of disease) is known; this is called supervised learning.
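To make the idea of supervised learning concrete, here is a minimal sketch in Python: it fabricates a small table of patient attributes together with a known outcome (disease onset) and fits a classifier to it. The feature names, data, and model choice are illustrative assumptions, not details taken from the article.

```python
# Minimal supervised-learning sketch on synthetic patient data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical patient attributes: age, BMI, systolic blood pressure.
X = np.column_stack([
    rng.normal(55, 12, n),    # age
    rng.normal(27, 4, n),     # BMI
    rng.normal(130, 15, n),   # systolic blood pressure
])
# Known outcome variable (e.g. onset of disease), loosely tied to the attributes.
risk = 0.04 * (X[:, 0] - 55) + 0.10 * (X[:, 1] - 27) + 0.02 * (X[:, 2] - 130)
y = (risk + rng.normal(0, 1, n) > 0).astype(int)

# Supervised learning: fit on labelled examples, then check held-out accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

A real precision-medicine model would be trained on clinical data and validated far more carefully; the point here is only the shape of the task — features in, known outcome labels, learned mapping out.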
A more complex form of machine learning is the neural network – a technology that has been available since the 1960s, has been well established in healthcare research for several decades, 3 and has been used for categorisation applications like determining whether a patient will acquire a particular disease.
It views problems in terms of inputs, outputs and weights of
variables or ‘features’ that associate inputs with outputs. It has
been likened to the way that neurons process signals, but the
analogy to the brain's function is relatively weak.
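The description above — inputs, outputs, and weights that associate the two — can be shown in a few lines. This is a hand-written forward pass with arbitrary weights; in a trained network the weights would be learned from labelled data.

```python
# A tiny neural network forward pass: weighted inputs -> hidden layer -> output.
import numpy as np

x = np.array([0.2, 0.7, 0.1])            # three input features
W1 = np.array([[ 0.5, -0.3],
               [ 0.8,  0.2],
               [-0.6,  0.9]])            # weights from inputs to two hidden units
W2 = np.array([0.7, -0.4])               # weights from hidden units to the output

hidden = np.tanh(x @ W1)                 # hidden activations
output = 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid output, e.g. P(category)
print(output)
```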
The most complex forms of machine learning involve deep
learning, or neural network models with many levels of features
or variables that predict outcomes. There may be thousands
of hidden features in such models, which are uncovered by the
faster processing of today's graphics processing units and cloud
architectures. A common application of deep learning in healthcare
is recognition of potentially cancerous lesions in radiology images. 4
Deep learning is increasingly being applied to radiomics, or the
detection of clinically relevant features in imaging data beyond
what can be perceived by the human eye. 5
Both radiomics and deep
learning are most commonly found in oncology-oriented image
analysis. Their combination appears to promise greater accuracy
in diagnosis than the previous generation of automated tools for
image analysis, known as computer-aided detection or CAD.
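As a rough illustration of "many levels of features", the sketch below stacks convolutional layers into a small image classifier of the kind used (at far larger scale) for lesion detection. The layer sizes, the two-class output, and the fake 64×64 input are assumptions made for the example, not a description of any published radiology model.

```python
# Illustrative deep model for image classification; sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # greyscale image -> 16 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer learns higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # two classes, e.g. lesion / no lesion
)

scan = torch.randn(1, 1, 64, 64)                  # one fake 64x64 single-channel image
logits = model(scan)
print(logits.shape)                               # torch.Size([1, 2])
```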
Deep learning is also increasingly used for speech recognition
and, as such, is a form of natural language processing (NLP), described below.
Unlike earlier forms of statistical analysis, each
feature in a deep learning model typically has little meaning to
a human observer. As a result, the explanation of the model's
outcomes may be very difficult or impossible to interpret.
Diagnosis and treatment applications
Diagnosis and treatment of disease has been a focus of AI since
at least the 1970s, when MYCIN was developed at Stanford for
diagnosing blood-borne bacterial infections. 8
This and other early
rule-based systems showed promise for accurately diagnosing and
treating disease, but were not adopted for clinical practice. They
were not substantially better than human diagnosticians, and they
were poorly integrated with clinician workflows and medical record
systems.
More recently, IBM's Watson has received considerable attention
in the media for its focus on precision medicine, particularly cancer
diagnosis and treatment. Watson employs a combination of
machine learning and NLP capabilities. However, early enthusiasm
for this application of the technology has faded as customers
realized the difficulty of teaching Watson how to address
particular types of cancer 9
and of integrating Watson into care
processes and systems. 10 Watson is not a single product but a set
of ‘cognitive services’ provided through application programming
interfaces (APIs), including speech and language, vision, and
machine learning-based data-analysis programs. Most observers
feel that the Watson APIs are technically capable, but taking on
cancer treatment was an overly ambitious objective. Watson and
other proprietary programs have also suffered from competition
with free ‘open source’ programs provided by some vendors, such
as Google's TensorFlow.
Implementation issues with AI bedevil many healthcare
organizations. Although rule-based systems incorporated within
EHR systems are widely used, including at the NHS, 11 they lack the
precision of more algorithmic systems based on machine learning.
These rule-based clinical decision support systems are difficult to
maintain as medical knowledge changes and are often not able to
handle the explosion of data and knowledge based on genomic,
proteomic, metabolic and other ‘omic-based’ approaches to care.
This situation is beginning to change, but it is mostly present
in research labs and in tech firms, rather than in clinical practice.
Scarcely a week goes by without a research lab claiming that it
has developed an approach to using AI or big data to diagnose and treat a disease with equal or greater accuracy than human
clinicians. Many of these findings are based on radiological image
analysis, 12 though some involve other types of images such as
retinal scanning 13 or genomic-based precision medicine. 14 Since
these types of findings are based on statistically-based machine
learning models, they are ushering in an era of evidence- and
probability-based medicine, which is generally regarded as positive
but brings with it many challenges in medical ethics and patient/
clinician relationships. 15
Tech firms and startups are also working assiduously on the
same issues. Google, for example, is collaborating with health
delivery networks to build prediction models from big data to warn
clinicians of high-risk conditions, such as sepsis and heart failure. 16
Google, Enlitic and a variety of other startups are developing
AI-derived image interpretation algorithms. Jvion offers a ‘clinical
success machine’ that identifies the patients most at risk as well as
those most likely to respond to treatment protocols. Each of these
could provide decision support to clinicians seeking to find the best
diagnosis and treatment for patients.
There are also several firms that focus specifically on diagnosis
and treatment recommendations for certain cancers based on
their genetic profiles. Since many cancers have a genetic basis,
human clinicians have found it increasingly complex to understand
all genetic variants of cancer and their response to new drugs and
protocols. Firms like Foundation Medicine and Flatiron Health,
both now owned by Roche, specialise in this approach.
Both providers and payers for care are also using ‘population
health’ machine learning models to predict populations at risk
of particular diseases 17 or accidents 18 or to predict hospital
readmission. 19 These models can be effective at prediction,
although they sometimes lack all the relevant data that might add
predictive capability, such as patient socio-economic status.
But whether rules-based or algorithmic in nature, AI-based
diagnosis and treatment recommendations are sometimes
challenging to embed in clinical workflows and EHR systems. Such
integration issues have probably been a greater barrier to broad
implementation of AI than any inability to provide accurate and
effective recommendations; and many AI-based capabilities
for diagnosis and treatment from tech firms are standalone in
nature or address only a single aspect of care. Some EHR vendors
have begun to embed limited AI functions (beyond rule-based
clinical decision support) into their offerings, 20 but these are in the
early stages. Providers will either have to undertake substantial
integration projects themselves or wait until EHR vendors add
more AI capabilities.
|
Answer all user questions using only information from the prompt provided by the user. Do not use any outside sources, or any information stored in your databases. | Which wars have been financed with estate taxes? | History
Early History of U.S. Taxes on Transfers
Taxes on the transfer of assets have existed throughout history, dating back to ancient Egypt. In
the United States, they were used prior to the modern estate and gift tax in 1916 to finance wars
and similar emergencies.8 The first was enacted in 1797 to expand the Navy, given strained
relationships with France. At that time, a documentary stamp tax on the inventories of deceased
persons, the receipt of inheritances from an estate (except those to a wife, children, or
grandchildren), and the probates and letters of administration of estates was imposed. These taxes
were fixed amounts, although they were larger for larger inheritances and small inheritances were
exempt. These taxes were repealed in 1802.
In 1862, during the Civil War, an inheritance tax was imposed. Unlike the current estate tax, the
tax was imposed on the beneficiaries, but unlike the stamp tax, it was a percentage of the
inheritance. The tax was also imposed on gifts during the lifetime. The rate depended on the
family relationships of the beneficiaries, and spouses and small inheritances were exempt. This
tax was repealed in 1870.
8For a history, see Darien P. Jacobson, Brian G. Raub, and Barry W. Johnson, The Estate Tax: Ninety Years and
Counting, Internal Revenue Service, Statistics of Income Bulletin, Summer 2007, pp. 118-128, https://www.irs.gov/
pub/irs-soi/ninetyestate.pdf, and Joint Committee on Taxation, History, Present Law, And Analysis Of The Federal
Wealth Transfer Tax System, JCX-52-15, March 26, 2015, https://www.jct.gov/publications/2015/jcx-52-15/. For a
history of the gift tax, see David Joulfanian, The Federal Gift Tax: History, Law, and Economics, U.S. Department of
the Treasury, Office of Tax Analysis, OTA Paper 100, November 2007, https://home.treasury.gov/system/files/131/wp
100.pdf.
The 1894 income tax was not a transfer tax, but it included inheritances and gifts in income. It
was short-lived after being found unconstitutional by the Supreme Court in Pollock v. Farmers’
Loan and Trust Company.
In 1898, an estate tax was enacted to finance the Spanish-American War. Rates were graduated
depending on degree of kinship and size, bequests to spouses were exempt, and there was an
overall exemption that excluded small estates. It was repealed in 1902.
The Modern Estate and Gift Tax
Lawmakers enacted the direct ancestor of the current estate tax in 1916. It contained exemptions
that excluded small estates, and rates were graduated based on the size of the estate. Over time,
rates were increased, but the basic form of the tax remained. The top rate was 10% in 1916 with a
$50,000 exemption, and it was increased to 25% in 1917, with the first $50,000 taxed at 2%. At
the end of World War I in 1918, rates were reduced on smaller estates and charitable deductions
were allowed. The top rate was increased to 40% in 1924, and a credit for state taxes was allowed
for up to 25% of estate tax liability. The top rate was reduced to 20% from 1926 to 1931,
increased to 40% in 1932, and eventually rose as high as 77% from 1941 to 1976.
A separate gift tax was enacted in 1924 with the same rates and exemptions, and an annual
exclusion per donee of $500. The tax was repealed in 1926, then reenacted in 1932 with a $5,000
annual exclusion per donee.
In 1942, changes addressed the difference in treatment in community property states, where each
spouse owned half the assets and only the half owned by the decedent was subject to tax. In other
states where couples could own assets jointly, exclusions were allowed only if the surviving
spouse contributed to the assets. The 1942 act treated assets in community property states the
same as in other states. In 1948, this rule was changed to allow a deduction for property
transferred to a spouse whether by the will or by law. The 1942 act made other changes in rates
and exemptions and instituted a $3,000 annual gift exclusion per donee.
The Tax Reform Act of 1976 (P.L. 94-455) created the modern unified estate and gift tax with a
unified credit and graduated rates applied to all transfers. The 1976 act also instituted carryover
basis for inherited assets, but that provision resulted in considerable controversy and was repealed
retroactively in 1980. The exemption was increased from $60,000 to $120,000, and the top rate
was lowered to 70%.
The Economic Growth and Tax Relief Act of 2001 (EGTRRA; P.L. 107-16) provided for a
gradual reduction in the estate tax. Prior to these changes, the law had applied a unified exemption of $675,000 to both lifetime gifts and the estate.
Under EGTRRA, the estate tax exemption rose from $675,000 in 2001 to $3.5 million in 2009,
and the top tax rate fell from 55% to 45%. Although combined estate and gift tax rates are
graduated, the exemption is effectively in the form of a credit that eliminates tax due at lower
rates, resulting in a flat rate on taxable assets under 2009 law. The gift tax exemption was,
however, restricted to $1 million.
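A short worked example may help with the point that the exemption acts as a credit producing a flat rate: under the 2009 parameters quoted above ($3.5 million exemption, 45% top rate), tax falls only on the amount above the exemption. The estate value below is hypothetical, and the calculation deliberately ignores deductions, state credits, and the graduated brackets below the exemption.

```python
# Simplified estate-tax calculation under the 2009 parameters (illustrative only).
EXEMPTION = 3_500_000
RATE = 0.45

def estate_tax(gross_estate: float) -> float:
    """Exemption acts like a credit wiping out the lower brackets, leaving a flat rate."""
    taxable = max(gross_estate - EXEMPTION, 0.0)
    return RATE * taxable

print(estate_tax(5_000_000))   # 675000.0 -> 45% of the $1.5 million above the exemption
```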
For 2010, EGTRRA scheduled the elimination of the estate tax, although it retained the gift tax
and its $1 million exemption. EGTRRA also provided for a carryover of basis for assets inherited
at death in 2010, so that, in contrast with prior law, heirs who sold assets would have to pay tax
on gains accrued during the decedent’s lifetime. This provision had a $1.3 million exemption for
gain (plus $3 million for a spouse).
As with other provisions of EGTRRA, the estate tax revisions were to expire in 2011, returning
the tax provisions to their pre-EGTRRA levels. The exemption would have reverted to $1 million
(a value that had already been scheduled for pre-EGTRRA law) and the rate to 55% (with some
graduated rates). The carryover basis provision effective in 2010 would have been eliminated (so
that heirs would not be taxed on gain accumulated during the decedent’s life when they inherited
assets).
During debate on the estate tax, most agreed that the 2010 provisions would not be continued and,
indeed, could be repealed retroactively. President Obama proposed a permanent extension of the
2009 rules (a $3.5 million exemption and a 45% tax rate), and the House provided for that
permanent extension on December 3, 2009 (H.R. 4154). The Senate Democratic leadership
indicated a plan to retroactively reinstate the 2009 rules for 2010 and beyond. Senate Minority
Leader McConnell proposed an alternative of a 35% tax rate and a $5 million exemption.9 A
similar proposal for a $5 million exemption and a 35% rate, which also included the ability of the
surviving spouse to inherit any unused exemption of the decedent, is often referred to as Lincoln-Kyl (named after the two Senators who sponsored it). Other proposals began with the $3.5 million
exemption and 45% rate and would have phased in the $5 million exemption and 55% rate. Some
Members of Congress argued for permanent estate tax repeal.10
At the end of 2010, P.L. 111-312 enacted a temporary two-year extension of the estate and gift
tax, with a $5 million unified exemption, a 35% rate, and inheritance of unused spousal
exemptions. For 2010, estates could elect to be taxed under the estate tax or under the carryover
rules. These provisions provided for estate tax rules through 2012, after which the provisions
would have reverted to the pre-EGTRRA rules ($1 million exemption, 55% top rate) absent
legislation.
The American Taxpayer Relief Act of 2012 (P.L. 112-240) established the permanent exemption
($5.25 million, indexed for inflation) and rate (40%) described above.
The 2017 tax revision (P.L. 115-97) doubled the exemption for the years 2018 through 2025. The
House had proposed doubling the exemption through 2024 and then repealing the estate tax and
lowering the gift tax rates to 35%.
One issue that arises with the expiration of the increased exemptions is the treatment of gifts that
had been transferred with exemptions higher than the exemptions that the law would revert to. |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | Could you explain two key aspects of the U.S. legal system - first, by explaining the different functions of federal and state courts, and then explaining the differences between a civil and criminal case? Answer in a minimum of 300 words. | The legal system in the United States is not composed of a single set of laws and courts. Instead, an inter-connected web of laws, regulations, and courts stretches across the country at the state and federal levels. Courts interpret state and federal law to refine the rules through case law.
The federal and state court systems have a hierarchy that allows higher courts to review decisions made by lower courts. Also, both systems have civil and criminal courts.
FindLaw's Legal System section provides a closer look at the U.S. court system, including the federal court system and state court cases. This section also includes information about civil courts, including articles about class action lawsuits, tips on negotiating a settlement, the judge's role in court, and a helpful glossary with terms related to courts and the law.
The United States Judiciary: Federal and State Courts
The United States has two court systems: federal and state courts. Each court system hears criminal and civil cases. This section describes the differences and similarities between the federal and state court systems.
Federal Courts Explained
Article III of the U.S. Constitution allows Congress to create a federal court system. The federal judiciary has 13 courts of appeals, 94 U.S. district courts, a court of federal claims, and a court of international trade. The United States Supreme Court, the final arbiter of any case, is also a federal court.
If you file a case in the federal system, your case begins at the district court level with a federal judge. If you appeal your case, a federal circuit court will hear the appeal. If you appeal the circuit court's decision, you must petition the U.S. Supreme Court to hear your case.
The Supreme Court grants reviews (certiorari) of about 100 cases a year and is not obligated to hear your case. You can learn more about Supreme Court justices from FindLaw's Supreme Court Center.
Federal District Courts have jurisdiction (i.e., the ability to hear a case) over the following types of cases:
Cases where the U.S. government is a party to the lawsuit
Disputes that raise constitutional questions or involve federal law (i.e., federal question jurisdiction)
Controversies between the U.S. government and a state or foreign entity
Sometimes, a plaintiff (the person or entity filing a lawsuit) may file a civil case in federal or state court. Two ways this happens are through diversity jurisdiction and federal question jurisdiction. Whether the plaintiff has this option depends on the circumstances of their case.
A plaintiff who claims a person or entity violated their constitutional rights or broke a federal law may invoke federal question jurisdiction. If they can show that their case involved either a constitutional violation or that it arose under federal law, the federal district court may hear it.
For a federal district court to have diversity jurisdiction, the plaintiff must show that they and the defendant(s) in a case live in different states and the amount in controversy exceeds $75,000.
Read FindLaw's article on federal courts for more information.
State Courts Explained
The U.S. Constitution and state law establish each state's court system. Because of this, not every state has the same type of court system. Check your state's laws for more specific information about your state's courts.
Because each state's court system is different, no absolute structure applies to the states. But generally, a plaintiff files their civil case in a state court, sometimes known as a trial court or state district court. If they appeal the trial court's decision, most states have an appellate court system that hears the appeal. If a party appeals from the appellate court, most states have a supreme court to review the case (although, again, the name of the highest court in your state may differ).
If the highest court in your state issues a decision, you may generally petition the U.S. Supreme Court to consider your case. But, just like a federal circuit court appeal, the U.S. Supreme Court is not obligated to hear your case.
Generally, state courts have jurisdiction over criminal and civil cases that involve that state's laws. For example, a Wisconsin state court could hear a civil case that invokes Wisconsin state law. The Wisconsin state court generally could not hear a civil case where the cause of action arose in Florida and broke Florida's state laws, as the Wisconsin court generally would not have jurisdiction over the subject matter of the case nor personal jurisdiction over the parties to the case.
Read FindLaw's article on state courts for more in-depth information.
Civil vs. Criminal Cases
Deciding whether to file your case in a federal or state court is important. Another defining factor of your case is whether it is civil or criminal.
Generally, civil cases involve claims between private parties. For example, if you borrowed tools from your neighbor and refused to return them, the neighbor could file a civil case against you in state court. Or, if you believe the school board at a local public school infringed your First Amendment right to free speech, you could file a civil claim against them in federal court.
Criminal cases involve alleged offenses against society. Instead of a dispute between private parties, criminal cases involve the government bringing criminal charges against someone and prosecuting them.
The following section describes the differences between civil and criminal cases.
Civil Cases
When a person, organization, or entity (such as a corporation) claims that another person, organization, or entity breached a legal duty owed to the plaintiff, they have a potential civil case. Common claims in civil lawsuits include the following:
Intentional torts (e.g., infliction of emotional distress or assault)
Negligence
Nuisance
Personal injury
Breach of contract
Property damage
Child custody, child support, and other family law disputes
Whether to file a civil lawsuit in federal or state court depends on the circumstances of your case. But, most plaintiffs file their civil lawsuits in state courts. Filing a civil case in federal court is only appropriate in certain circumstances.
In civil litigation, the plaintiff has the burden of proof at trial. They must prove their case by a preponderance of the evidence. This standard means that the plaintiff must prove to the trier of fact (judge or jury) that it is more likely than not that the defendant is liable for the plaintiff's claimed relief or damages.
Browse FindLaw's article on the basics of civil court for more information.
Criminal Cases
Unlike civil cases, where the injured party files a lawsuit, criminal cases involve the government bringing charges against the accused person.
Most crimes in the United States involve violations of state laws rather than federal laws. So, state courts hear most criminal cases. In a state criminal case, district attorneys prosecute the defendant.
But, suppose the government charges the defendant with a federal crime. In that case, a United States Attorney will prosecute the case in federal court. The prosecution has the burden of proof in a criminal case. They must prove the defendant's guilt beyond a reasonable doubt.
Not every criminal case involves actual crime victims. For example, the government can prosecute someone for driving under the influence even if they did not injure anyone or cause property damage.
https://www.findlaw.com/litigation/legal-system/introduction-to-the-u-s-legal-system.html
|
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | Why is there such a big difference between the highest and lowest paid rns? surely experience cant make that much of a difference in pay? | RN salaries increased for most license types, but not by a generous amount, according to the report. The median RN salary reported by survey respondents was $80,000, an increase of $2,000 from the 2022 survey. The median salary for APRNs/ARNPs was $117,300, which is a decrease of $2,700 (about 2%) from the 2022 report. This could be due to the younger average age of respondents in this group of nurses.
The report also revealed that the gender pay gap for RNs has narrowed but hasn’t disappeared. The median salary for a male RN is $6,000 higher than the median salary for a female RN (compared with a $14,000 gap in the 2022 survey). Nurses’ responses helped identify some possible explanations for this salary gap, such as the higher percentage of male RNs working night shifts and negotiating their salary. However, the gap in male-female negotiating tendencies is closing, as more female RNs are becoming proactive in asking for higher pay.
“These findings surrounding salary negotiation are encouraging,” said Felicia Sadler, MJ, BSN, RN, CPHQ, LSSBB, Vice President of Quality and Partner at Relias, in the report. “But it’s important that organizations commit to structures and processes that ensure continuous process improvements. Despite the shrinking pay gap, ongoing organizational salary reviews and advocacy and awareness campaigns are needed to close the gap and keep it closed.”
Our findings also showed that education can positively impact nurse salaries. Across license types, 40% of nurses who earned certification said it resulted in a salary increase.
Workplace safety and wellness
For the first time, our survey asked nurses about their experiences with workplace violence and how their jobs affect their mental health and wellness, which are crucial factors for job satisfaction and retention. Unfortunately, many nurses said they have either witnessed or directly experienced workplace violence, which can have detrimental effects on their physical and mental health.
About 22% of nurses said their organization has either weekly or monthly instances of workplace violence, according to our survey. And that’s not all.
Almost one-third (31%) of nurses had been subjected to verbal abuse by a colleague.
64% had been subjected to verbal abuse by a patient or a patient’s family member.
23% had been physically assaulted or abused by a patient or a patient’s family member.
In addition, nurses across all licensures and age groups said the profession has affected their mental health and wellness. Nurses ages 18 to 34 were more likely to report experiencing burnout, ethical dilemmas and moral injury, and compassion fatigue than nurses from other age groups.
Wellness resources also remain important to nurses. Based on data from our report, the top three wellness resources nurses wanted were:
Fitness stipends for memberships, equipment, or athletic wear
Reimbursement or stipends for helpful apps for relaxation, fitness, and nutrition
Free or reduced-cost mental health counseling services
“It’s crucial for nurses to have access to mental health benefits,” said Cat Golden, BSN, RN, Partner at Nurse.com, in the report. “As a pediatric nurse who faced frequent encounters with the untimely death of young patients and their families’ grief, being able to speak with a therapist while on duty was vital for preserving my own mental well-being and played a pivotal role in my effectiveness as a nurse.”
Satisfaction and retention
Valuable insights into factors that contribute to nurses’ job satisfaction and the outlook for the nursing profession were also captured in the report. The highest percentage of nurses across all licensures (81%) rated regular merit increases as most important to their job satisfaction, followed by manager (62%), and ability to practice to the full scope of nursing practice (62%).
However, 23% of nurses across all license types were considering leaving nursing, according to the survey. The top-ranked reasons for leaving nursing were dissatisfaction with management (25%) and better pay (24%). This is a concerning statistic for nurses, patients, and the healthcare system.
What could encourage nurses to stay? The Nurse.com report identified the following top factors that could motivate nurses to stay in the profession:
Higher pay (66%)
Flexible scheduling (33%)
Better support for work-life balance (30%)
More reasonable workload (28%)
Being able to work in a remote role (25%)
Some of the revelations in the report may come as a surprise to nurses, while others may mirror how they feel about their careers and workplaces. However, all nurses can use the report to mold a better professional life for themselves.
Use the information in this report to:
Compare your salary and benefits to peers.
Determine when to negotiate salary.
Assess if pursuing additional training, a degree, or certification aligns with your career goals.
Identify challenges and shortcomings within your organization.
Initiate conversations with nursing leaders and advocate for a safer, healthier workplace.
https://www.nurse.com/blog/nurse-salary-and-work-life-reports-revelations/
|
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | How do I go about choosing the right benchmark to calculate alpha given a particular basket of stocks? Explain in 250 words like I'm a portfolio manager. | Evaluating the return of an investment without regard to the risk taken offers very little insight as to how a security or portfolio has really performed. Every security has a required rate of return, as specified by the capital asset pricing model (CAPM).
The Jensen index, or alpha, is what helps investors determine how much a portfolio's realized return differs from the return it should have achieved. This article will provide a deeper understanding of alpha and its practical application.
Key Takeaways
Alpha refers to excess returns earned on an investment above the benchmark return.
Active portfolio managers seek to generate alpha in diversified portfolios, with diversification intended to eliminate unsystematic risk.
Because alpha represents the performance of a portfolio relative to a benchmark, it is often considered to represent the value that a portfolio manager adds to or subtracts from a fund's return.
Jensen’s alpha takes into consideration the capital asset pricing model (CAPM) and includes a risk-adjusted component in its calculation.
Alpha Defined
Alpha is computed in relation to the capital asset pricing model. The CAPM equation is used to identify the required return of an investment; it is often used to evaluate realized performance for a diversified portfolio. Because it's assumed that the portfolio being evaluated is a diversified portfolio (meaning that the unsystematic risk has been eliminated), and because a diversified portfolio's main source of risk is the market risk (or systematic risk), beta is an appropriate measure of that risk.
Alpha is used to determine by how much the realized return of the portfolio varies from the required return, as determined by CAPM. The formula for alpha is expressed as follows:
α = Rp – [Rf + (Rm – Rf) β]
Where:
Rp = Realized return of portfolio
Rm = Market return
Rf = the risk-free rate
β = the asset's beta
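To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The inputs are hypothetical single-period figures chosen for illustration, not data from this article.

def jensens_alpha(portfolio_return, market_return, risk_free_rate, beta):
    # alpha = Rp - [Rf + (Rm - Rf) * beta], with all returns measured over the same period
    required_return = risk_free_rate + (market_return - risk_free_rate) * beta
    return portfolio_return - required_return

# Hypothetical one-year figures: 12% realized, 10% market, 2% risk-free, beta of 1.1
print(f"{jensens_alpha(0.12, 0.10, 0.02, 1.1):.2%}")  # prints 1.20%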
What Does Alpha Measure?
Alpha measures risk premiums in terms of beta (β); therefore, it is assumed that the portfolio being evaluated is well diversified. The Jensen index requires using a different risk-free rate for each time interval measured during the specified period. For instance, if you are measuring the fund managers over a five-year period using annual intervals, you must examine the fund's annual returns minus the risk-free assets' returns (i.e., U.S. Treasury bill or one-year risk-free asset) for each year, and relate this to the annual return of the market portfolio minus the same risk-free rate.
This calculation method contrasts with both the Treynor and Sharpe measures in that both examine the average returns for the total period for all variables, which include the portfolio, market, and risk-free assets.
Alpha is a good measure of performance that compares the realized return with the return that should have been earned for the amount of risk borne by the investor. Technically speaking, it is a factor that represents the performance that diverges from a portfolio's beta, representing a measure of the manager's performance. For example, it's insufficient for an investor to consider the success or failure of a mutual fund merely by looking at its returns. The more relevant question is this: was the manager's performance sufficient to justify the risk taken to get said return?
Applying the Results
A positive alpha indicates the portfolio manager performed better than was expected based on the risk the manager took with the fund as measured by the fund's beta. A negative alpha means that the manager actually did worse than they should have given the required return of the portfolio.
The regression results usually cover a period between 36 and 60 months.
The Jensen index permits the comparison of portfolio managers' performance relative to one another, or relative to the market itself. When applying alpha, it's important to compare funds within the same asset class. Comparing funds from one asset class (i.e., large-cap growth) against a fund from another asset class (i.e., emerging markets) is meaningless because you are essentially comparing apples and oranges.
The chart below provides a good comparative example of alpha, or "excess returns." Investors can use both alpha and beta to judge a manager's performance.
Table 1
Fund Name | Asset Class | Ticker | Alpha 3 Yr | Beta 3 Yr | Trailing Return 3 Yr | Trailing Return 5 Yr
American Funds Growth Fund A | Large Growth | AGTHX | 4.29 | 1.01 | 16.61 | 20.46
Fidelity Large Cap Growth | Large Growth | FSLGX | 7.19 | 1.04 | 22.91 | --
T. Rowe Price Growth Stock | Large Growth | PRGFX | 5.14 | 1.03 | 17.67 | 21.54
Vanguard Growth Index Fund Admiral Shares | Large Growth | VIGAX | 6.78 | 1.04 | 19.76 | 21.43
The figures included in Table 1 indicate that on a risk-adjusted basis, the Fidelity Large Cap Growth yielded the best results of the funds listed. Its three-year alpha of 7.19 exceeded those of its peers in the small sample provided above.
It's important to note that not only are comparisons among the same asset class appropriate but the right benchmark should also be considered. The benchmark most frequently used to measure the market is the S&P 500 stock index, which serves as a proxy for "the market."
However, some portfolios and mutual funds include asset classes with characteristics that do not accurately compare against the S&P 500, such as bond funds, sector funds, real estate, etc. Therefore, the S&P 500 may not be the appropriate benchmark to use in that case. So the alpha calculation would have to incorporate the relative benchmark for that asset class.
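As a rough sketch of why this matters, the snippet below computes alpha for the same hypothetical fund against two candidate benchmarks. The returns and betas are invented for illustration; in practice each beta would come from a regression of the fund against that particular benchmark.

def jensens_alpha(rp, rm, rf, beta):
    return rp - (rf + (rm - rf) * beta)

rp, rf = 0.09, 0.02  # hypothetical realized fund return and risk-free rate
candidates = {
    "S&P 500 (equity proxy)": (0.10, 0.65),            # (benchmark return, beta vs. that benchmark)
    "Aggregate bond index (bond proxy)": (0.04, 1.05),
}
for name, (rm, beta) in candidates.items():
    print(f"{name}: alpha = {jensens_alpha(rp, rm, rf, beta):.2%}")
# The same fund shows very different alphas, so the benchmark should match the fund's asset class.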
The Bottom Line
Portfolio performance encompasses both return and risk. The Jensen index, or alpha, provides us with a fair standard of manager performance. The results can help us determine whether the manager added value or even extra value on a risk-adjusted basis. If so, it also helps us determine whether the manager's fees were justified when reviewing the results. Buying (or even keeping) investment funds without this consideration is like buying a car to get you from Point A to Point B without evaluating its fuel efficiency. | [question]
How do I go about choosing the right benchmark to calculate alpha given a particular basket of stocks? Explain in 250 words like I'm a portfolio manager.
=====================
[text]
Evaluating the return of an investment without regard to the risk taken offers very little insight as to how a security or portfolio has really performed. Every security has a required rate of return, as specified by the capital asset pricing model (CAPM).
The Jensen index, or alpha, is what helps investors determine how much a portfolio's realized return differs from the return it should have achieved. This article will provide a deeper understanding of alpha and its practical application.
Key Takeaways
Alpha refers to excess returns earned on an investment above the benchmark return.
Active portfolio managers seek to generate alpha in diversified portfolios, with diversification intended to eliminate unsystematic risk.
Because alpha represents the performance of a portfolio relative to a benchmark, it is often considered to represent the value that a portfolio manager adds to or subtracts from a fund's return.
Jensen’s alpha takes into consideration the capital asset pricing model (CAPM) and includes a risk-adjusted component in its calculation.
Alpha Defined
Alpha is computed in relation to the capital asset pricing model. The CAPM equation is used to identify the required return of an investment; it is often used to evaluate realized performance for a diversified portfolio. Because it's assumed that the portfolio being evaluated is a diversified portfolio (meaning that the unsystematic risk has been eliminated), and because a diversified portfolio's main source of risk is the market risk (or systematic risk), beta is an appropriate measure of that risk.
Alpha is used to determine by how much the realized return of the portfolio varies from the required return, as determined by CAPM. The formula for alpha is expressed as follows:
α = Rp – [Rf + (Rm – Rf) β]
Where:
Rp = Realized return of portfolio
Rm = Market return
Rf = the risk-free rate
β = the asset's beta
What Does Alpha Measure?
Alpha measures risk premiums in terms of beta (β); therefore, it is assumed that the portfolio being evaluated is well diversified. The Jensen index requires using a different risk-free rate for each time interval measured during the specified period. For instance, if you are measuring the fund managers over a five-year period using annual intervals, you must examine the fund's annual returns minus the risk-free assets' returns (i.e., U.S. Treasury bill or one-year risk-free asset) for each year, and relate this to the annual return of the market portfolio minus the same risk-free rate.
This calculation method contrasts with both the Treynor and Sharpe measures in that both examine the average returns for the total period for all variables, which include the portfolio, market, and risk-free assets.
Alpha is a good measure of performance that compares the realized return with the return that should have been earned for the amount of risk borne by the investor. Technically speaking, it is a factor that represents the performance that diverges from a portfolio's beta, representing a measure of the manager's performance. For example, it's insufficient for an investor to consider the success or failure of a mutual fund merely by looking at its returns. The more relevant question is this: was the manager's performance sufficient to justify the risk taken to get said return?
Applying the Results
A positive alpha indicates the portfolio manager performed better than was expected based on the risk the manager took with the fund as measured by the fund's beta. A negative alpha means that the manager actually did worse than they should have given the required return of the portfolio.
The regression results usually cover a period between 36 and 60 months.
The Jensen index permits the comparison of portfolio managers' performance relative to one another, or relative to the market itself. When applying alpha, it's important to compare funds within the same asset class. Comparing funds from one asset class (i.e., large-cap growth) against a fund from another asset class (i.e., emerging markets) is meaningless because you are essentially comparing apples and oranges.
The chart below provides a good comparative example of alpha, or "excess returns." Investors can use both alpha and beta to judge a manager's performance.
Table 1
Fund Name | Asset Class | Ticker | Alpha 3 Yr | Beta 3 Yr | Trailing Return 3 Yr | Trailing Return 5 Yr
American Funds Growth Fund A | Large Growth | AGTHX | 4.29 | 1.01 | 16.61 | 20.46
Fidelity Large Cap Growth | Large Growth | FSLGX | 7.19 | 1.04 | 22.91 | --
T. Rowe Price Growth Stock | Large Growth | PRGFX | 5.14 | 1.03 | 17.67 | 21.54
Vanguard Growth Index Fund Admiral Shares | Large Growth | VIGAX | 6.78 | 1.04 | 19.76 | 21.43
The figures included in Table 1 indicate that on a risk-adjusted basis, the Fidelity Large Cap Growth yielded the best results of the funds listed. Its three-year alpha of 7.19 exceeded those of its peers in the small sample provided above.
It's important to note that not only are comparisons among the same asset class appropriate but the right benchmark should also be considered. The benchmark most frequently used to measure the market is the S&P 500 stock index, which serves as a proxy for "the market."
However, some portfolios and mutual funds include asset classes with characteristics that do not accurately compare against the S&P 500, such as bond funds, sector funds, real estate, etc. Therefore, the S&P 500 may not be the appropriate benchmark to use in that case. So the alpha calculation would have to incorporate the relative benchmark for that asset class.
The Bottom Line
Portfolio performance encompasses both return and risk. The Jensen index, or alpha, provides us with a fair standard of manager performance. The results can help us determine whether the manager added value or even extra value on a risk-adjusted basis. If so, it also helps us determine whether the manager's fees were justified when reviewing the results. Buying (or even keeping) investment funds without this consideration is like buying a car to get you from Point A to Point B without evaluating its fuel efficiency.
https://www.investopedia.com/articles/financial-theory/08/deeper-look-at-alpha.asp
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Answers must only be provided from the text below. Sentences must be 9 words or less, no words longer than 8 characters. | how should kids operate the fridge freezer? | instructions
• Warnings and Important Safety Instructions in this manual
do not cover all possible conditions and situations that
may occur.
It is your responsibility to use common sense, caution,
and care when installing, maintaining, and operating your
appliance.
• Because these following operating instructions cover
various models, the characteristics of your refrigerator
may differ slightly from those described in this manual and
not all warning signs may be applicable. If you have any
questions or concerns, contact your nearest service center
or find help and information online at www.samsung.com.
• R-600a or R-134a is used as a refrigerant. Check the
compressor label on the rear of the appliance or the rating
label inside the fridge to see which refrigerant is used for
your appliance. When this product contains flammable gas
(Refrigerant R-600a), contact your local authority in regard
to safe disposal of this product.
• In order to avoid the creation of a flammable gas-air
mixture if a leak in the refrigerating circuit occurs, the size
of the room in which the appliance may be sited depends
on the amount of refrigerant used.
• Never start up an appliance showing any signs of damage.
If in doubt, consult your dealer. The room must be 1 m3
in size for every 8 g of R-600a refrigerant inside the
appliance.
The amount of refrigerant in your particular appliance is
shown on the identification plate inside the appliance.
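For illustration only (the charge below is hypothetical; always use the figure on your appliance's identification plate): an appliance containing 48 g of R-600a would need a room of at least 6 m3, since 48 g divided by 8 g per m3 equals 6.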
• Refrigerant squirting out of the pipes could ignite or cause
an eye injury. When refrigerant leaks from the pipe, avoid
any naked flames and move anything flammable away
from the product and ventilate the room immediately.
- Failing to do so may result in fire or explosion.
• To avoid contamination of food, please respect the
following instructions:
- Opening the door for long periods can cause a
significant increase of the temperature in the
compartments of the appliance.
- Clean regularly surfaces that can come in contact with
food and accessible drainage systems.
- Clean water tanks if they have not been used for 48 h;
flush the water system connected to a water supply if
water has not been drawn for 5 days.
- Store raw meat and fish in suitable containers in the
refrigerator, so that it is not in contact with or drip onto
other food.
- Two-star frozen-food compartments are suitable for
storing pre-frozen food, storing or making ice cream and
making ice cubes.
- One-, two- and three-star compartments are not suitable
for the freezing of fresh food.
- If the refrigerating appliance is left empty for long
periods, switch off, defrost, clean, dry, and leave the
door open to prevent mould developing within the
appliance.
Important safety symbols and precautions:
Please follow all safety instructions in this manual. This
manual uses the following safety symbols.
WARNING
Hazards or unsafe practices that may result in severe personal
injury, property damage, and/or death.
CAUTION
Hazards or unsafe practices that may result in severe personal
injury and/or property damage.
NOTE
Useful information that helps users understand or benefit
from the refrigerator.
These warning signs are here to prevent injury to you and
others.
Please follow them carefully.
After reading this section, keep it in a safe place for future
reference.
Important safety precautions
Warning; Risk of fire / flammable materials
WARNING
• When positioning the appliance, ensure the supply cord is
not trapped or damaged.
• Do not locate multiple portable socket-outlets or portable
power supplies at the rear of the appliance.
• Fill with potable water only.
• Connect to potable water supply only.
• Keep ventilation openings, in the appliance enclosure or in
the built-in structure, clear of obstruction.
• Do not use mechanical devices or any other means
to accelerate the defrosting process, other than those
recommended by the manufacturer.
• Do not damage the refrigerant circuit.
• Do not use electrical appliances inside the food storage
compartments of the appliance, unless they are of the type
recommended by the manufacturer.
• This appliance is not intended for use by persons
(including children) with reduced physical, sensory, or
mental capabilities, or those who lack experience and
knowledge, unless they have been given supervision or
instruction concerning the use of the appliance by a person
responsible for their safety | Answers must only be provided from the text below. Sentences must be 9 words or less, no words longer than 8 characters. how should kids operate the fridge freezer?
instructions
• Warnings and Important Safety Instructions in this manual
do not cover all possible conditions and situations that
may occur.
It is your responsibility to use common sense, caution,
and care when installing, maintaining, and operating your
appliance.
• Because these following operating instructions cover
various models, the characteristics of your refrigerator
may differ slightly from those described in this manual and
not all warning signs may be applicable. If you have any
questions or concerns, contact your nearest service center
or find help and information online at www.samsung.com.
• R-600a or R-134a is used as a refrigerant. Check the
compressor label on the rear of the appliance or the rating
label inside the fridge to see which refrigerant is used for
your appliance. When this product contains flammable gas
(Refrigerant R-600a), contact your local authority in regard
to safe disposal of this product.
• In order to avoid the creation of a flammable gas-air
mixture if a leak in the refrigerating circuit occurs, the size
of the room in which the appliance may be sited depends
on the amount of refrigerant used.
• Never start up an appliance showing any signs of damage.
If in doubt, consult your dealer. The room must be 1 m3
in size for every 8 g of R-600a refrigerant inside the
appliance.
The amount of refrigerant in your particular appliance is
shown on the identification plate inside the appliance.
• Refrigerant squirting out of the pipes could ignite or cause
an eye injury. When refrigerant leaks from the pipe, avoid
any naked flames and move anything flammable away
from the product and ventilate the room immediately.
- Failing to do so may result in fire or explosion.
• To avoid contamination of food, please respect the
following instructions:
- Opening the door for long periods can cause a
significant increase of the temperature in the
compartments of the appliance.
- Clean regularly surfaces that can come in contact with
food and accessible drainage systems.
- Clean water tanks if they have not been used for 48 h;
flush the water system connected to a water supply if
water has not been drawn for 5 days.
- Store raw meat and fish in suitable containers in the
refrigerator, so that it is not in contact with or drip onto
other food.
- Two-star frozen-food compartments are suitable for
storing pre-frozen food, storing or making ice cream and
making ice cubes.
- One-, two- and three-star compartments are not suitable
for the freezing of fresh food.
- If the refrigerating appliance is left empty for long
periods, switch off, defrost, clean, dry, and leave the
door open to prevent mould developing within the
appliance.
Important safety symbols and precautions:
Please follow all safety instructions in this manual. This
manual uses the following safety symbols.
WARNING
Hazards or unsafe practices that may result in severe personal
injury, property damage, and/or death.
CAUTION
Hazards or unsafe practices that may result in severe personal
injury and/or property damage.
NOTE
Useful information that helps users understand or benefit
from the refrigerator.
These warning signs are here to prevent injury to you and
others.
Please follow them carefully.
After reading this section, keep it in a safe place for future
reference.
Important safety precautions
Warning; Risk of fire / flammable materials
WARNING
• When positioning the appliance, ensure the supply cord is
not trapped or damaged.
• Do not locate multiple portable socket-outlets or portable
power supplies at the rear of the appliance.
• Fill with potable water only.
• Connect to potable water supply only.
• Keep ventilation openings, in the appliance enclosure or in
the built-in structure, clear of obstruction.
• Do not use mechanical devices or any other means
to accelerate the defrosting process, other than those
recommended by the manufacturer.
• Do not damage the refrigerant circuit.
• Do not use electrical appliances inside the food storage
compartments of the appliance, unless they are of the type
recommended by the manufacturer.
• This appliance is not intended for use by persons
(including children) with reduced physical, sensory, or
mental capabilities, or those who lack experience and
knowledge, unless they have been given supervision or
instruction concerning the use of the appliance by a person
responsible for their safety |
Limit your response to presenting information contained within the text provided above - You are not permitted to use external resources. | What actually is the federal debt? | Deficits, Debt, and Interest
The annual differences between revenue (i.e., taxes and fees) that the government collects and
outlays (i.e., spending) result in the budget deficit (or surplus). Annual budget deficits or
surpluses determine, over time, the level of publicly held federal debt and affect the level of
interest payments to finance the debt.
Budget Deficits
Between FY2009 and FY2012, annual budget deficits as a percentage of GDP were sharply higher
than in any period since FY1945.27 The unified budget deficit in FY2015 was $439 billion, or
2.5% of GDP—the lowest level since FY2007. The unified deficit, according to some budget
experts, gives an incomplete view of the government’s fiscal conditions because it includes off-
budget surpluses.28 Excluding off-budget items (Social Security benefits paid net of Social
Security payroll taxes collected and the U.S. Postal Service’s net balance), the on-budget FY2015
federal deficit was $466 billion.
Budget Deficit for FY2016
The January 2016 CBO baseline estimated the FY2016 budget deficit at $544 billion, or 2.9% of
GDP. The rise in the estimated budget deficit for FY2016 is the result of increases in spending
more than offsetting a smaller rise in revenues. FY2016 outlays are projected to increase to
21.2% of GDP, up from 20.7% of GDP in FY2015; revenues are projected to increase from
18.2% of GDP to 18.3% of GDP over the same period.
Federal Debt and Debt Limit
Gross federal debt is composed of debt held by the public and intragovernmental debt.
Intragovernmental debt is the amount owed by the federal government to other federal agencies,
to be paid by the Department of the Treasury, which mostly consists of money contained in trust
funds. Debt held by the public is the total amount the federal government has borrowed from the
public and remains outstanding. This measure is generally considered to be the most relevant in
macroeconomic terms because it is the debt sold in credit markets. Changes in debt held by the
public generally track the movements of the annual unified deficits and surpluses.29
Historically, Congress has set a ceiling on federal debt through a legislatively established limit.
The debt limit also imposes a form of fiscal accountability that compels Congress, in the form of
a vote authorizing a debt limit increase, and the President, by signing the legislation, to take
visible action to allow further federal borrowing when nearing the statutory limit.
The debt limit by itself has no effect on the borrowing needs of the government.30 The debt limit,
however, can hinder the Treasury’s ability to manage the federal government’s finances when the
amount of federal debt approaches this ceiling, or when the suspension expires. In those
instances, the Treasury has had to take extraordinary measures to meet federal obligations,
leading to inconvenience and uncertainty in Treasury operations at times.31 At the end of CY2015
(December 31, 2015), federal debt subject to limit was approximately $18.922 trillion, of which
$13.673 trillion was held by the public.32
The debt limit is currently suspended until March 15, 2017. Upon reinstatement, the debt limit
will be modified to exactly accommodate any increases in statutory debt subject to limit above
the previous limit ($18.1 trillion). At the end of calendar year 2015, total debt subject to limit was
$18.9 trillion. Barring advanced legislative action, the debt limit will be reached when reinstated,
so long as federal debt remains above the previous limit and continues to rise.
Net Interest
In FY2015, the United States spent $223 billion, or 1.3% of GDP, on net interest payments on the
debt. What the government pays in interest depends on market interest rates as well as on the size
and composition of the federal debt. Currently, low interest rates have held net interest payments
as a percentage of GDP below the historical average despite increases in borrowing to finance the
debt.33 Some economists, however, have expressed concern that federal interest costs could rise
once the economy fully recovers, resulting in future strain on the budget. Interest rates are
projected to gradually rise in the CBO baseline, resulting in net interest payments of $830 billion
(3.0% of GDP) in FY2026. If interest costs rise to this level, they will be higher than the
historical average. | Deficits, Debt, and Interest
The annual differences between revenue (i.e., taxes and fees) that the government collects and
outlays (i.e., spending) result in the budget deficit (or surplus). Annual budget deficits or
surpluses determine, over time, the level of publicly held federal debt and affect the level of
interest payments to finance the debt.
Budget Deficits
Between FY2009 and FY2012, annual budgets as a percentage of GDP were sharply higher than
deficits in any period since FY1945.27 The unified budget deficit in FY2015 was $439 billion, or
2.5% of GDP—the lowest level since FY2007. The unified deficit, according to some budget
experts, gives an incomplete view of the government’s fiscal conditions because it includes off-
budget surpluses.28 Excluding off-budget items (Social Security benefits paid net of Social
Security payroll taxes collected and the U.S. Postal Service’s net balance), the on-budget FY2015
federal deficit was $466 billion.
Budget Deficit for FY2016
The January 2016 CBO baseline estimated the FY2016 budget deficit at $544 billion, or 2.9% of
GDP. The rise in the estimated budget deficit for FY2016 is the result of increases in spending
more than offsetting a smaller rise in revenues. FY2016 outlays are projected to increase to
21.2% of GDP, up from 20.7% of GDP in FY2015; revenues are projected to increase from
18.2% of GDP to 18.3% of GDP over the same period.
Federal Debt and Debt Limit
Gross federal debt is composed of debt held by the public and intragovernmental debt.
Intragovernmental debt is the amount owed by the federal government to other federal agencies,
to be paid by the Department of the Treasury, which mostly consists of money contained in trust
funds. Debt held by the public is the total amount the federal government has borrowed from the
public and remains outstanding. This measure is generally considered to be the most relevant in
macroeconomic terms because it is the debt sold in credit markets. Changes in debt held by the
public generally track the movements of the annual unified deficits and surpluses.29
Historically, Congress has set a ceiling on federal debt through a legislatively established limit.
The debt limit also imposes a form of fiscal accountability that compels Congress, in the form of
a vote authorizing a debt limit increase, and the President, by signing the legislation, to take
visible action to allow further federal borrowing when nearing the statutory limit.
The debt limit by itself has no effect on the borrowing needs of the government.30 The debt limit,
however, can hinder the Treasury’s ability to manage the federal government’s finances when the
amount of federal debt approaches this ceiling, or when the suspension expires. In those
instances, the Treasury has had to take extraordinary measures to meet federal obligations,
leading to inconvenience and uncertainty in Treasury operations at times.31 At the end of CY2015
(December 31, 2015), federal debt subject to limit was approximately $18.922 trillion, of which
$13.673 trillion was held by the public.32
The debt limit is currently suspended until March 15, 2017. Upon reinstatement, the debt limit
will be modified to exactly accommodate any increases in statutory debt subject to limit above
the previous limit ($18.1 trillion). At the end of calendar year 2015, total debt subject to limit was
$18.9 trillion. Barring advanced legislative action, the debt limit will be reached when reinstated,
so long as federal debt remains above the previous limit and continues to rise.
Net Interest
In FY2015, the United States spent $223 billion, or 1.3% of GDP, on net interest payments on the
debt. What the government pays in interest depends on market interest rates as well as on the size
and composition of the federal debt. Currently, low interest rates have held net interest payments
as a percentage of GDP below the historical average despite increases in borrowing to finance the
debt.33 Some economists, however, have expressed concern that federal interest costs could rise
once the economy fully recovers, resulting in future strain on the budget. Interest rates are
projected to gradually rise in the CBO baseline, resulting in net interest payments of $830 billion
(3.0% of GDP) in FY2026. If interest costs rise to this level, they will be higher than the
historical average.
Limit your response to presenting information contained within the text provided above - You are not permitted to use external resources.
What actually is the federal debt? |
Use only the provided context block to find answers to the user prompt. Do not use external sources. | What did FCC have to do with 230? | Section 230 was enacted in 1996 in response to a trial court ruling that allowed an online platform to be subject to liability for hosting defamatory speech, in part because the platform had said it would police its site for unwanted speech. Congress was concerned that this ruling created a perverse incentive for sites to refrain from monitoring content to avoid liability. Section 230 can be seen as speech-protective: by barring lawsuits that would punish platforms for hosting speech, it may encourage platforms to err on the side of hosting more content, while still allowing sites to take down content they see as objectionable. To this end, Section 230 contains two different provisions that courts have generally viewed as two distinct liability shields.
First, Section 230(c)(1) states that interactive computer service providers and users may not “be treated as the publisher or speaker of any information provided by another” person. This provision has been broadly interpreted to bar a wide variety of suits that would treat service providers as the publisher of another’s content, including claims of defamation, negligence, discrimination under the Civil Rights Act of 1964, and state criminal prosecutions. However, if a site helps develop the unlawful content, courts have ruled that Section 230(c)(1) immunity does not apply. Accordingly, courts have, for example, rejected applying Section 230 to cases brought by the FTC against a defendant website that solicited or was involved in publishing allegedly unlawful content. More generally, Section 230 will not bar suits that seek to hold sites liable for their own conduct, rather than another’s content. But courts have said that acts inherent to publishing, such as reviewing, suggesting, and sometimes even editing content, may not, by themselves, qualify as helping develop the challenged content. As a consequence, Section 230(c)(1) immunity can apply regardless of whether the site chooses to actively police content or whether it chooses to take a more hands-off approach.
Second, Section 230(c)(2) provides that interactive computer service providers and users may not be “held liable” for any voluntary, “good faith” action “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Section 230(c)(2) also immunizes providing “the technical means to restrict access” to objectionable material. Unlike Section 230(c)(1), Section 230(c)(2) applies only to good faith actions to restrict objectionable material. Courts have ruled that allegations of anticompetitive motives can demonstrate bad faith, disqualifying sites from claiming Section 230(c)(2) immunity. There are, however, relatively few published federal court cases interpreting this provision.
Because Section 230(c)(2) contains a good-faith requirement and Section 230(c)(1) does not, some courts have recognized the importance of determining when each immunity provision applies. At least one decision suggests that Section 230(c)(2) applies when a service provider “does filter out offensive material,” while Section 230(c)(1) applies when providers “refrain from filtering or censoring the information on their sites.” But, as one scholar has noted, other courts have cited Section 230(c)(1) when dismissing claims predicated on takedowns. Another possibility is that Section 230(c)(1) does not apply when the plaintiff’s own content is at issue—that is, while Section 230(c)(1) immunity only applies if a third party created the disputed content, Section 230(c)(2) can apply when a person sues a site for taking down the plaintiff’s own content. Again, however, other decisions suggest that courts may apply Section 230(c)(1) even when the suit involves the plaintiff’s own content. A third view is that Section 230(c)(2) might apply if the provider helps develop content and is therefore ineligible for (c)(1) immunity. In short, court rulings are inconsistent on the question of when each of the two immunity provisions governs.
Section 230(e) expressly states that the law will not bar liability in certain cases. Defendants may not claim Section 230 immunity in federal criminal prosecutions, cases involving intellectual property laws, suits under the Electronic Communications Privacy Act or “similar” state laws, and certain civil actions and state criminal prosecutions relating to sex trafficking.
If Section 230’s liability shield does not apply, the person being sued will not automatically be held liable. Instead, it means only that courts can continue to adjudicate the case. The EO begins by stating in Section 1 the President’s belief that online platforms are engaging in “selective censorship,” harming national discourse and restricting Americans’ speech. Section 2 turns to the interpretation of Section 230(c), arguing that the “scope” of this immunity provision “should be clarified” and the law should not be extended to platforms that “engage in deceptive or pretextual actions” to censor “certain viewpoints.” The EO maintains that Congress intended Section 230(c) to only protect service providers that engage in “Good Samaritan” blocking of harmful content. Section 2 further states that providers should not be entitled to Section 230(c)(2) immunity if they remove content without acting in “good faith,” including by taking “deceptive or pretextual actions (often contrary to their stated terms of service)” to suppress certain viewpoints.
Section 2 also directs the Commerce Secretary, “in consultation with the Attorney General, and acting through the National Telecommunications and Information Administration (NTIA),” to request the FCC to issue regulations interpreting Section 230. Among other things, the EO, perhaps in response to the Section 230 jurisprudence discussed above, specifies that FCC’s proposed regulations should clarify:
(1) “the interaction between” Section 230(c)(1) and (c)(2) to explain when a service provider that cannot obtain Section 230(c)(2) immunity is also ineligible for protection under (c)(1); and (2) the meaning of “good faith” in Section 230(c)(2), including whether violating terms of service or failing to provide procedural protections qualifies as bad faith.
Section 4 of the EO instructs the FTC to “consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices.” Specifically, the EO suggests that if platforms “restrict speech in ways that do not align with those entities’ public representations about” how they monitor content on their sites, these acts may qualify as unfair or deceptive practices under the FTC Act. The EO also directs the FTC to consider whether complaints of “online censorship” received by the White House “allege violations of law,” and whether to develop a report on these complaints.
The other provisions of the EO assign additional tasks to more executive departments. Section 3 of the EO requires agency review of federal spending on advertising and marketing on online platforms, and Sections 5 and 6 contain instructions for the Attorney General to establish a working group and propose federal legislation to implement the policies announced in the EO. | System instructions: [Use only the provided context block to find answers to the user prompt. Do not use external sources.]
User query: [What did FCC have to do with 230?]
Text:
[Section 230 was enacted in 1996 in response to a trial court ruling that allowed an online platform to be subject to liability for hosting defamatory speech, in part because the platform had said it would police its site for unwanted speech. Congress was concerned that this ruling created a perverse incentive for sites to refrain from monitoring content to avoid liability. Section 230 can be seen as speech-protective: by barring lawsuits that would punish platforms for hosting speech, it may encourage platforms to err on the side of hosting more content, while still allowing sites to take down content they see as objectionable. To this end, Section 230 contains two different provisions that courts have generally viewed as two distinct liability shields.
First, Section 230(c)(1) states that interactive computer service providers and users may not “be treated as the publisher or speaker of any information provided by another” person. This provision has been broadly interpreted to bar a wide variety of suits that would treat service providers as the publisher of another’s content, including claims of defamation, negligence, discrimination under the Civil Rights Act of 1964, and state criminal prosecutions. However, if a site helps develop the unlawful content, courts have ruled that Section 230(c)(1) immunity does not apply. Accordingly, courts have, for example, rejected applying Section 230 to cases brought by the FTC against a defendant website that solicited or was involved in publishing allegedly unlawful content. More generally, Section 230 will not bar suits that seek to hold sites liable for their own conduct, rather than another’s content. But courts have said that acts inherent to publishing, such as reviewing, suggesting, and sometimes even editing content, may not, by themselves, qualify as helping develop the challenged content. As a consequence, Section 230(c)(1) immunity can apply regardless of whether the site chooses to actively police content or whether it chooses to take a more hands-off approach.
Second, Section 230(c)(2) provides that interactive computer service providers and users may not be “held liable” for any voluntary, “good faith” action “to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Section 230(c)(2) also immunizes providing “the technical means to restrict access” to objectionable material. Unlike Section 230(c)(1), Section 230(c)(2) applies only to good faith actions to restrict objectionable material. Courts have ruled that allegations of anticompetitive motives can demonstrate bad faith, disqualifying sites from claiming Section 230(c)(2) immunity. There are, however, relatively few published federal court cases interpreting this provision.
Because Section 230(c)(2) contains a good-faith requirement and Section 230(c)(1) does not, some courts have recognized the importance of determining when each immunity provision applies. At least one decision suggests that Section 230(c)(2) applies when a service provider “does filter out offensive material,” while Section 230(c)(1) applies when providers “refrain from filtering or censoring the information on their sites.” But, as one scholar has noted, other courts have cited Section 230(c)(1) when dismissing claims predicated on takedowns. Another possibility is that Section 230(c)(1) does not apply when the plaintiff’s own content is at issue—that is, while Section 230(c)(1) immunity only applies if a third party created the disputed content, Section 230(c)(2) can apply when a person sues a site for taking down the plaintiff’s own content. Again, however, other decisions suggest that courts may apply Section 230(c)(1) even when the suit involves the plaintiff’s own content. Athird view is that Section 230(c)(2) might apply if the provider helps develop content and is therefore ineligible for (c)(1) immunity. In short, court rulings are inconsistent on the question of when each of the two immunity provisions governs.
Section230(e)expressly states that the law will not bar liability in certain cases.Defendants may not claim Section 230 immunity in federal criminal prosecutions, cases involving intellectual property laws, suits under the Electronic Communications Privacy Act or “similar” state laws, and certain civil actions and state criminal prosecutions relating to sex trafficking.
If Section 230’s liability shield does not apply, the person being sued will not automatically be held liable. Instead, it means only that courts can continue to adjudicate the case. The EO begins by stating in Section 1 the President’s belief that online platforms are engaging in “selective censorship,” harming national discourse and restricting Americans’ speech. Section 2 turns to the interpretation of Section 230(c), arguing that the “scope” of this immunity provision “should be clarified” and the law should not be extended to platforms that “engage in deceptive or pretextual actions” to censor “certain viewpoints.” The EO maintains that Congress intended Section 230(c) to only protect service providers that engage in “Good Samaritan” blocking of harmful content. Section 2 further states that providers should not be entitled to Section 230(c)(2) immunity if they remove content without acting in “good faith,” including by taking “deceptive or pretextual actions (often contrary to their stated terms of service)” to suppress certain viewpoints.
Section 2 also directs the Commerce Secretary, “in consultation with the Attorney General, and acting through the National Telecommunications and Information Administration (NTIA),” to request the FCC to issue regulations interpreting Section 230. Among other things, the EO, perhaps in response to the Section 230 jurisprudence discussed above, specifies that FCC’s proposed regulations should clarify:
(1) “the interaction between” Section 230(c)(1) and (c)(2) to explain when a service provider that cannot obtain Section 230(c)(2) immunity is also ineligible for protection under (c)(1); and (2) the meaning of “good faith” in Section 230(c)(2), including whether violating terms of service or failing to provide procedural protections qualifies as bad faith.
Section 4 of the EO instructs the FTC to “consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices.” Specifically, the EO suggests that if platforms “restrict speech in ways that do not align with those entities’ public representations about” how they monitor content on their sites, these acts may qualify as unfair or deceptive practices under the FTC Act. The EO also directs the FTC to consider whether complaints of “online censorship” received by the White House “allege violations of law,” and whether to develop a report on these complaints.
The other provisions of the EO assign additional tasks to more executive departments. Section 3 of the EO requires agency review of federal spending on advertising and marketing on online platforms, and Sections 5 and 6 contain instructions for the Attorney General to establish a working group and propose federal legislation to implement the policies announced in the EO.] |
Use only the document provided.
If the question can not be answered then respond with 'I am unable to answer this request' | Summarize the information from this paper. | BIAS IN POLICING
Bias in the American legal system includes biases in law enforcement or policing, where
racial disparities have long been documented and continue to persist. Compared with
White Americans, Black and Latino men are disproportionately more likely to be
stopped, searched, and arrested by police officers (Kahn & Martin 2016). Furthermore,
members of these minority groups also experience greater use of force by the police
(Goff & Kahn 2012, Kahn et al. 2016). Recently, a string of high-profile deadly cases
involving Black men like Michael Brown, Eric Garner, and Walter Scott has increased
public awareness of these hostile interactions with law enforcement. An initial analysis
of public records revealed that non-White minorities made up almost half (47%) of all
people killed by the police, despite comprising only 37% of the population. Furthermore,
of those killed, 32% of Blacks and 25% of Latinos were unarmed, compared with 15% of
Whites (Swaine et al. 2015). This troubling pattern of statistics has called into question
the role that race may play in police decisions.
Psychological research has examined this important social issue by directly
investigating the content of racial stereotypes, as well as indirectly assessing how these
associations affect perceptions and behavior. Self-report surveys have indicated that
hostility, violence, and criminality are commonly associated with Black Americans, even
by egalitarian-minded White Americans (Devine 1989, Devine & Elliot 1995, Dovidio et
al. 1986). Additionally, priming low-prejudiced individuals with Black versus White stimuli
typically results in the faster categorization of negative than positive attributes (e.g.,
Fazio et al. 1995, Greenwald et al. 1998, Wittenbrink et al. 1997). Together, these
findings suggest that awareness of social stereotypes and exposure to stigmatized
group members can affect decision making.
The Impact of Race on Weapon and Crime Perception
Applying the above rationale to police contexts, Payne (2001) developed the Weapons
Identification Task (WIT) to better understand the psychological mechanisms that may
drive racially biased shootings. This sequential priming procedure involves a series of
trials that begin with the presentation of a Black or White face, which participants are
instructed to ignore. After 200 ms, the prime is replaced by the target stimulus, which is
a picture of either a tool or a handgun. Participants must correctly categorize the object
as quickly as possible using one of two computer keys. Across two initial studies, Payne
(2001) found evidence of racial bias in both the reaction times and error rates. Following
the presentation of a Black versus White facial prime, participants were faster to
correctly identify a gun and more likely to misidentify a tool as a gun, depending on the
implementation of a response deadline. The results revealed that the racial primes had
an automatic influence on the visual identification of weapons (see also Amodio et al.
2004, Klauer & Voss 2008, Payne et al. 2002). As such, Payne (2001) proposed that
law enforcement officers may experience bias through the activation of Black
stereotypes, especially when the cognitive resources needed to engage behavioral
control are depleted.
Correll et al. (2002) extended this line of inquiry by developing a video game that
similarly examines the impact of race on weapon processing. In their first-person
Shooter Task, participants are randomly presented with a range of one to four real-life
photos of public spaces (e.g., parks, offices, courtyards). On the final image, a Black or
White male target suddenly appears superimposed holding either a handgun or an
innocuous object like a cell phone, soda can, or wallet. Participants must quickly press
either a “shoot” or “don't shoot” button on their computer keyboard. When participants
are given 850 ms to respond, they are faster to shoot armed Blacks versus Whites and
slower to not shoot unarmed Blacks compared with Whites. However, providing
participants with a 630-ms deadline results in a biased pattern of errors, such that
unarmed Blacks are more likely to be incorrectly shot than their White counterparts and
armed Whites are less likely to be shot than armed Black targets (see Correll et al.
2014, Mekawi & Bresin 2015). Biased responses were due to participants having lower
thresholds for shooting Black compared with White targets (see also Greenwald et al.
2003). Furthermore, the magnitude of shooter bias was related to cultural awareness of
Black stereotypes related to danger, violence, and aggression. Consequently, African
American participants demonstrated the same pattern of shooter bias, despite holding
presumably more positive attitudes about their group. These findings suggest that
decisions to shoot may be strongly influenced by negative racial schemas that affect
perceptions in ambiguous situations.
Additional research supports the notion that racial stereotypes may serve as perceptual
tuners that direct attention in a biased manner. Eberhardt et al. (2004) conducted a
series of studies examining how associations between Blacks and crime affected visual
processing. In their first study, undergraduates were subliminally primed with a photo of
a Black male face, a White male face, or no face at all before completing a supposedly
unrelated object detection task. On this critical task, severely degraded images of
crime-relevant (e.g., guns, knives) or -irrelevant (e.g., phones, keys) objects appeared
on the screen and slowly increased in clarity. Participants needed fewer frames to
accurately detect a crime-relevant object following a Black versus White or no-face
prime, a pattern of bias that was not related to their explicit racial attitudes. These
results were replicated among California police officers who were primed with crime
words (e.g., arrest, shoot) and then tested for their memory of the distractor face
presented on the task. Compared with the correct image, officers were more likely to
incorrectly choose a Black target with more stereotypical features following the crime
primes. Early perceptual processes of the police may therefore be impacted by cultural
associations that produce racial profiling of suspects and bias their subsequent
treatment.
Plant & Peruche (2005) also used actual law enforcement officers in their research to
examine how race influenced their responses to criminal suspects. Police officers
completed a more static version of the Shooter Task in which only photos of Black or
White male faces appeared with a gun or object superimposed without a background
image. The researchers wanted to examine whether repeated exposure to the program
would reduce the bias expressed by the officers. As in past studies with undergraduate
participants (e.g., Correll et al. 2002), the police were initially more likely to shoot
unarmed Black versus White targets and had a lower threshold for shooting Black
targets. However, this biased tendency disappeared in the second half of trials,
signifying that officers learned to dissociate race from the presence of weapons to make
more accurate decisions on the task.
The potential benefit of expert police training on performance is further supported by the
findings of Correll et al. (2007b), who compared the performance of three different
samples: Denver community members, Denver police officers, and national police
officers. In contrast to citizens who demonstrated bias in both their reaction time and
error rates, police officers demonstrated it only in their response latencies. In other
words, police officers did not make racially biased mistakes on the task but were still
faster to shoot armed Black men and slower to not shoot unarmed Black targets. This
shooter bias was more pronounced among officers serving high-crime areas with larger
Black and minority populations. The findings suggest that exposure to negative racial
stereotypes can impact the speed with which police officers make decisions, but that
their extensive training and field experience may allow them to exert more control over
their behavior than regular citizens.
In sum, independent labs have accumulated a considerable amount of evidence that
race can impact crime-oriented perceptions and bias subsequent decision making. Yet,
findings are often mixed when comparing data obtained from police officers versus
undergraduate or civilian samples. Under certain circumstances, the police express a
similar magnitude of racial bias as individuals not in law enforcement; in other
situations, their prior experience helps them limit the influence of stereotypes.
Beyond the Impact of Race
The mixed results discussed above point to the importance of conducting research that
considers factors other than race to more fully understand the complexity of real-life
police decision making. To this end, some studies have explored how personal
motivations, situational contexts, and physical cues may attenuate or exacerbate the
expression of racial bias.
Personal motivation. Research that has examined motivational processes
demonstrates that responses to race are not uniformly biased. For example, Payne
(2001) found that motivation to control prejudice moderated the relationship between
explicit measures of bias and performance on the WIT. Participants with low motivation
to control prejudice tended to show a positive correlation between modern racism
scores and task performance. However, those with higher motivation levels tended to
show a dissociation between explicit and implicit bias, indicating a regulatory effort to
override stereotyping effects. Similarly, Amodio and colleagues (2006, 2008) have
examined the impact of internal (personal) versus external (normative) motivations to
respond without prejudice. Participants in their studies completed the WIT while having
their brain activity recorded. The data indicated that internally motivated participants
responded more accurately on the task, particularly following stereotypical errors.
Because this neural activity occurred below conscious awareness, the researchers
proposed that some individuals are able to engage a spontaneous form of control that
helps reduce the influence of race on behavior.
In contrast, Swencionis & Goff (2017) proposed that the motivation to view the world in
hierarchical terms may increase bias in police decisions. Social Dominance Theory
(Sidanius & Pratto 1999) posits that group-based inequalities are maintained by cultural
influences that promote social stratification based on factors such as age, sex, and
race. Consequently, power is primarily distributed to and legitimized by high-status
groups and institutions. Past work has found that people with high social dominance
orientation (SDO) are more attracted to hierarchy-enhancing professions, such as law
enforcement, politics, and business (Sidanius et al. 2004). Given that police officers
tend to report greater SDO levels than public defenders, college students, or community
members (Sidanius et al. 1994), they may be more prone to expressing discrimination
against low-status groups.
Situational contexts. Recognizing that police decisions do not occur in a social
vacuum, some researchers have attempted to recreate ecologically valid situations that
may contribute to the expression of racial bias. For example, Correll et al. (2007a)
reasoned that frequent media or environmental exposure to stereotypical depictions of
Blacks may increase shooter bias. In line with this hypothesis, they found that
participants who were first exposed to stories involving Black versus White criminal
activity later showed more bias on the Shooter Task. A similar pattern emerged when
they manipulated the number of armed Black and unarmed White targets appearing on
the task. Thus, increasing the accessibility of associations between Blacks and danger
resulted in more pronounced anti-Black bias.
Cox et al. (2014) also argued for the use of more complex situational contexts to assess
various psychological factors that influence real-life decisions. To this end, they
developed a modified version of the Shooter Task that used short video clips along with
static photos of the suspect and recorded responses through a gun apparatus instead of
computer keys. Because the police usually have prior knowledge and expectations
about neighborhoods, they also manipulated where the crimes on the task supposedly
took place by providing the exact city location. Wisconsin police officers were randomly
assigned to complete the task embedded within a primarily White or non-White
neighborhood. When examining responses on photo trials, the researchers found that
police officers did not make racially biased errors but were faster to shoot armed Black
versus White targets, as in the work by Correll et al. (2007b). Interestingly, they also
found that the composition of the neighborhood interacted with the race of the officers,
such that more errors were made when officers were assigned to other-race areas.
| Use only the document provided.
If the question cannot be answered, then respond with 'I am unable to answer this request.'
Summarize the information from this paper.
|
Please respond to this prompt ONLY using the information provided in the context block. | According to the provided text, how does virtual memory improve the efficiency of real physical memory (RAM) usage in computer systems? | Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory (an address space), while in fact it may be
physically fragmented and may even overflow on to disk storage. Systems that use this technique make programming of large applications easier and use real physical memory (e.g.
RAM) more efficiently than those without virtual memory.
http://en.wikipedia.org/wiki/Virtual_memory
Page Fault: A page is a fixed-length block of memory that is used as a unit of transfer between physical memory and external storage like a disk, and a page fault is an interrupt (or
exception) to the software raised by the hardware, when a program accesses a page that is
mapped in address space, but not loaded in physical memory.
http://en.wikipedia.org/wiki/Page_fault
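As a rough illustration of the two definitions above (virtual pages mapped onto a limited set of physical frames, with a page fault raised whenever a referenced page is not resident), the following minimal sketch simulates demand paging. The frame count, reference string, and least-recently-used eviction policy are illustrative assumptions, not details taken from the cited articles; the low-frame run previews the degenerate behavior described next as thrashing.

```python
from collections import OrderedDict

def count_page_faults(reference_string, num_frames):
    """Simulate demand paging with LRU eviction and count page faults."""
    frames = OrderedDict()  # resident pages, ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # resident: just refresh its recency
        else:
            faults += 1                     # page fault: the page must be loaded
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults

refs = [0, 1, 2, 0, 1, 3, 0, 1, 2, 3] * 3     # a small, repeating working set
print(count_page_faults(refs, num_frames=4))  # 4: only the initial loads fault
print(count_page_faults(refs, num_frames=2))  # 30: every reference faults; throughput collapses
```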
Thrash is the term used to describe a degenerate situation on a computer where increasing resources are used to do a decreasing amount of work. In this situation the system is
said to be thrashing. Usually it refers to two or more processes accessing a shared resource
repeatedly such that serious system performance degradation occurs because the system is
spending a disproportionate amount of time just accessing the shared resource. Resource
access time may generally be considered as wasted, since it does not contribute to the advancement of any process. In modern computers, thrashing may occur in the paging system
(if there is not ‘sufficient’ physical memory or the disk access time is overly long), or in the
communications system (especially in conflicts over internal bus access), etc.
http://en.wikipedia.org/wiki/Thrash_(computer_science) |
Respond using only information from the provided content.
Adhere to a 300-word limit.
Avoid responding in table format or JSON | According to the article, if you are unable to obtain a normal credit card, what are your alternatives? | **Ten steps to rebuild your credit**
There are many ways to rebuild credit, and the most effective options vary from person to person. Use this list of options as a starting point to make your own personalized plan to rebuild your credit.
1. Get current with payments
Before you do anything else to rebuild credit, make sure every account you have is current (not behind on payments). Accounts that are more than 30 days past due can do serious damage to your credit, and the later they get, the worse the damage will be.
Plus, outstanding balances can mean late fees and interest fees get piled on top of your existing debt. The longer you're behind, the more expensive it'll be to catch up.
If you're struggling to get current on your credit cards, make sure to contact your issuers. In most cases, a credit card issuer will work with you to establish a payment plan. After all, it's in the issuer's best interest for you to repay that debt.
2. Pay down high balances
One of the key factors used in credit scoring is called your credit utilization ratio. This is the ratio of how much credit card debt you have (amounts owed) versus your total available credit. For example, a credit card with a balance of $500 and a credit limit of $1,000 has a utilization ratio of 50% ($500 / $1,000 = 0.5, or 50%).
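To make the arithmetic concrete, here is a minimal sketch that computes per-card and overall utilization; the card names, balances, and limits are invented for illustration, and the 30% threshold is simply the rule of thumb cited just below.

```python
# Minimal sketch: utilization = balance / limit. All figures are invented examples.
cards = [
    {"name": "Card A", "balance": 500, "limit": 1000},
    {"name": "Card B", "balance": 150, "limit": 3000},
]

GUIDELINE = 0.30  # the "keep utilization below 30%" rule of thumb

for card in cards:
    utilization = card["balance"] / card["limit"]
    status = "above the 30% guideline" if utilization > GUIDELINE else "within the guideline"
    print(f'{card["name"]}: {utilization:.0%} ({status})')

overall = sum(c["balance"] for c in cards) / sum(c["limit"] for c in cards)
print(f"Overall utilization: {overall:.0%}")
```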
High utilization (being close to your credit limits) is a warning sign to creditors that you may have taken on too much debt and could be living beyond your means. As a result, high utilization can hurt your credit score. One way to rebuild credit is to pay down those balances. The general rule of thumb is to keep your utilization below 30%. Try not to owe more than about one-third of your credit limit; keeping your utilization below 10% is ideal.
Building a budget that prioritizes debt repayment is often the best method for paying down high balances. See below for more information on budgets, or our guide to paying off debt for our top tips.
If you're already living on a tight budget, a debt consolidation loan may be a good way to pay down credit cards and boost your credit score. Opening a debt consolidation loan can bring down your score in the short term, but can benefit your score in the long term. If changing your budget isn't an option, it's worth investigating whether a debt consolidation loan is right for you.
3. Pay on time, every time
Every time you make a credit card payment and the issuer reports your payment to the credit bureaus, you are contributing to your payment history. Your payment history is the most important part of your credit score.
No plan to rebuild credit will work if you aren't making payments on time. Make at least your minimum required payment by the due date every single month. (Ideally, pay off your entire balance every month to avoid credit card interest fees.)
4. Activate automatic payments
If you're having trouble remembering due dates, you can let the credit card company take care of it for you. Most banks and issuers will allow you to set up automatic payments. You can choose the amount you want to pay -- make sure it's at least your minimum payment -- as well as when you want the payments to process.
If you don't have automatic payments set to cover your entire monthly bill, be sure to follow up with additional payments to pay your full balance.
5. Keep balances low
As mentioned, your utilization rate has a lot of influence on your credit score. Once you've paid down your outstanding balances, make sure to keep them low.
You'll struggle to rebuild credit if you keep running up your credit card balances after paying them down. People with excellent credit tend to have utilization rates below 10%.
6. Open a secured credit card
The only surefire way to rebuild credit is to have a recent positive payment history. Of course, if your credit is heavily damaged, you may have trouble qualifying for a credit card with which to build that payment history. This is where a secured credit card can help.
Secured credit cards can be pretty easy to get, even if your credit is damaged. That's because secured cards rely on a cash security deposit to minimize risk to the issuer. If you pay off your balance in full, you'll get the security deposit back when you close your account. Some issuers will even automatically upgrade you to an unsecured account and return your deposit.
7. Become an authorized user on someone else's card
Another way to rebuild credit is to be added as an authorized user on another person's card (such as a trusted family member). When you become an authorized user on someone else's credit card, you receive your own credit card with your name. But the credit account is still the responsibility of the primary account holder. The card company typically reports the credit card account to the credit bureaus for both the primary account holder and the authorized user.
As long as the account is in good standing, being added as an authorized user can help raise your credit score.
Being an authorized user isn't without risks, however. For example, if the cardholder or the authorized user runs up a high balance, both users could see credit score damage. Only tie your credit score to individuals you trust.
8. Build a budget -- and stick to it
Any plan to rebuild credit score damage is sure to fail if you don't address the root of the problem. In many cases, the root cause boils down to the lack of a budget -- and yes, that means a realistic budget, not an idealized one. If your budget doesn't reflect your actual lifestyle, spending, and debts, it'll be useless.
A good budget can help you repay debts and keep from overextending yourself in the future.
RELATED: Best Budgeting Apps
9. Keep an eye on your credit reports and scores
As you work to rebuild credit, be sure to check up on your credit reports and scores regularly. Many credit card issuers offer free monthly credit scores, especially on credit-building products. You can also get free copies of your credit reports from each of the three bureaus once a year through AnnualCreditReport.com.
If you find any errors on your reports, be sure to dispute them quickly with the credit bureaus.
10. Give it time
Like it or not, sometimes time is the only way to rebuild credit. Those delinquent payments and defaulted accounts aren't going anywhere fast.
It can take years of building a positive payment history to recover from big mistakes -- especially when those mistakes can sit on your reports for up to seven years. If you're doing everything right to rebuild your credit, but you're not seeing much movement in your credit scores: be patient. Time -- and keeping on top of your payments -- heals most credit wounds.
How long will it take to rebuild my credit?
Every credit profile is unique. As a result, the best strategy for rebuilding credit will depend on your credit history and the reasons for your credit problems. In other words, the answer to the question of "How long does it take to rebuild credit?" is: It depends.
A low credit score caused by high credit card balances can be the quickest thing to fix (assuming you have the funds to pay them off). Paying down high balances can help you rebuild credit in 30 days or less.
On the other hand, if you need to rebuild credit due to late payments or a defaulted account, you're probably going to need longer. It will take at least six to 12 months to rebuild credit scores to an acceptable level -- and several years for the negative items to disappear altogether.
In fact, negative items can linger on your credit reports for up to 10 years in certain cases (primarily bankruptcy), with most negative items having a shelf life of seven years. On the bright side, negative items impact your credit scores less as they age, particularly when you've been building a positive payment history in the meantime.
How can I raise my credit score by 100 points?
If you have very high credit utilization -- meaning you're close to your credit limits -- paying down your balances could provide a large credit score boost. Credit scores damaged by credit report errors can also jump quite a bit when those errors are removed.
Outside of these situations, however, you'll typically need to rebuild credit over many months to see a gain of 100 points or more. There is no guaranteed way to raise your credit score by a specific amount -- and 100 points is a lot to expect. For example, a 100-point jump from 570 to 670 moves you from bad credit into fair credit.
What type of credit cards work for rebuilding credit?
The best credit cards to rebuild credit have minimal costs and report your payments to the credit bureaus each month. This means cards with affordable annual fees -- or, even better, credit cards with no annual fees -- and the option to make automatic payments.
Wondering where to start? A number of credit cards for fair or average credit won't charge a fee. If you can't qualify for an unsecured card without an annual fee, consider a secured credit card instead.
Secured cards differ from traditional cards in one key way: the deposit. Secured credit cards require an upfront cash deposit to open and maintain. This makes them safer for the issuer. Even if you have significant credit damage, you can likely find an issuer willing to offer you a secured credit card.
Banks where you already have a good reputation and your local credit union are often the best places to find a secured card. Look for one that might allow you to graduate easily to an unsecured card (and avoid annual fees, if possible). | query: According to the article, if you are unable to obtain a normal credit card, what are your alternatives?
Answer in 3-5 paragraphs and use ONLY the text provided. | What are the hidden costs of fast fashion? | Fast fashion has revolutionized the fashion industry at a cost to the environment and
human rights. The fast fashion business model relies on the exploitation of resources
and human labor to deliver garments following the latest trends to its consumers at
an unprecedented rate. This quick output of garments demands a sizeable volume of
raw materials fed into the fast fashion industry, creating a significant amount of waste,
pollution and degradation to air, water and wildlife habitat. The pollution introduced
by the fast fashion industry results in devastating impacts to both terrestrial and
aquatic environments, with harmful effects linked to habitat degradation, proliferation
of chemicals and microplastics in waterways, and the increasing impact of climate
change from anthropogenic greenhouse gas emissions.
Despite the increased demand and consumption of fast fashion garments and
people's apparent growing interest in fashion, consumers are buying more while wearing
fewer of the items they own. The poor quality of fast fashion clothing contributes to
the limited lifespans of garments, which often end up decomposing slowly in landfills
or being incinerated. In addition to degrading in landfills or being incinerated, fast
fashion clothing has also become a notorious source of microplastics in marine
environments as the cheap, plastic-based materials shed fibers that make their way to
the oceans.
On top of the environmental exploitation that allows for fast fashion’s cheap prices,
the other contributing factor is worker exploitation in low-income countries where
factories are based. Workers — primarily young women — are subjected to hazardous
working conditions while earning unlivable wages, despite the companies pulling in
massive profits.
Although both the fashion industry and consumers have indicated that sustainability
is a priority, fast fashion is an increasingly unsustainable market that continues to
grow, relatively unchecked. And the scale of this industry is enormous: For a company
such as Shein, an estimated 1,000 new styles are uploaded daily — though there has
been speculation that this figure may be a gross underestimate (Zhou, 2022). With the
number of units manufactured per style ranging from 50-100, according to
the Shein website, this results in a minimum of 50,000 new garments created every
day.
Changing these practices requires drawing attention to the harms of fast fashion and
shifting the narrative from the glamour that has been assigned to overconsumption
toward fashion that embraces sustainability and justice.
Behind the glamour of the fashion industry hides a steep environmental price. The
fashion industry as a whole is responsible for consuming 79 trillion liters of water per
year, producing over 92 million tons of solid waste per year, and contributing up to an
estimated 20% of global wastewater and 10% of CO2 emissions (Niinimaki et al., 2020;
UN Climate Change, 2018).
This output of CO2 exceeds that of the international aviation and shipping industries
combined (UN Climate Change, 2018). Concern continues to rise as, over a span of
roughly 20 years, the number of new garments made per year has nearly doubled and
global consumption of fashion has increased by 400% (World Bank, 2019; Collective
Fashion Justice). If this trend continues, industry greenhouse gas emissions could also
increase significantly, possibly by over 50% by the year 2030 (World Bank, 2019). One of
the most notorious sectors driving these harms has also become one of the fastest
growing: the fast fashion industry.
Fast fashion is an exploitative, growing industry based on the replication and mass
production of garments following current trends — a business model that has
revolutionized the industry, simplifying consumers’ purchasing process and
expediting the turnover of both garments and trends.
This transformation, however, comes at a price. Every day fast fashion companies are
capable of producing a shocking 10,000 new garment styles (Williams, 2022). These
items are produced quickly and with an excess of waste: As much as 15% of the fabric
used during manufacturing is discarded during the garment production process
(Shukla, 2022). Unethical generation of waste has become a pivotal element of
transforming the fashion industry into the polluting behemoth it is today.
In addition to the waste produced during quick manufacturing, businesses are
generating yet more pollution to protect their business models (Lieber, 2018). Brands
at all levels, from Shein to Nike to Burberry, have been found to destroy new,
undamaged products (Mayo, 2021). This has often been carried out by burning, which
introduces additional CO2 and toxic gases on top of the industry’s already large
contribution. For companies like Shein, production costs are so low that returned
items are often destined for landfills because it costs less to simply dispose of items
than put them back into circulation (Williams, 2022).
The low costs set by the fast fashion industry have been praised by some for making
new clothing more accessible to people with lower incomes, yet the largest
consumers of fast fashion include customers of relatively substantial income, while
low-income communities bear the brunt of the industry’s waste and pollution. This
further demonstrates that the goal of this industry is not inclusivity but enormous
profit based on environmental and worker exploitation (Williams, 2022). Fast fashion
has changed society’s perception of what clothing is worth. The enticing low costs in
fast fashion push poorly made garments on people, promoting excess purchasing of
cheap items destined for the landfill rather than the purchasing of higher-quality
garments that will ultimately last longer.
Clothing production adversely affects the environment at every stage. Land is cleared
or degraded to produce fossil fuels for fibers, raise animals, or grow commodity crops.
Toxic chemicals are used in processing. Greenhouse gas emissions are produced in
manufacturing and transportation, and waste is generated by factories.
Polyester, a synthetic material obtained from oil, is one of the most widely used fabrics
in the fast fashion industry. It is also one of the most environmentally harmful fabrics.
This material alone was reported to consume 70 million barrels of oil in 2015; the
production of all synthetic fibers uses approximately 342 million barrels of oil each
year (Conca, 2015; Ellen Macarthur Foundation and Circular Fibres Initiative, 2017).
Petrochemicals, in fact, were estimated to be responsible for 62% of global textile
fibers (Textile Exchange, 2021). The extraction of fossil fuels requires destroying
wildlands to develop facilities and drilling sites, affecting the habitability of land and
causing habitat fragmentation, which disrupts essential animal behaviors (The
Wilderness Society, 2021). Producing synthetics also contributes greenhouse gases to
the atmosphere due to their origin in petrochemicals.
Fossil-fuel-based fabrics, however, are not the only materials of concern in the fast
fashion industry. Producing animal-based textiles such as wool involves the breeding
of farmed animals, which often results in widespread habitat loss from deforestation
and grassland conversion to create the necessary room for grazing or to produce feed
(McKinsey & Company 2020). Animal-based fibers used in fast fashion are also
responsible for a large portion of the industry’s massive water consumption. Sheep
bred for wool require significant amounts of water for hydration and feed crops that
frequently rely on additional, chemical-intensive processes (Center for Biological
Diversity, 2021).
The wool industry degrades wildlife habitat, with sheep displacing native wildlife and
eating the vegetation they need. It also produces large amounts of wastewater,
with fecal waste polluting waterways and slaughterhouses expelling additional
wastewater. This water often contains contaminants including pathogens, proteins,
fibers, and contamination from antibiotics and other pharmaceuticals (Center for
Biological Diversity, 2021).
Since 35% to 60% of the weight of shorn wool is contaminated with grease, dirt, feces,
vegetable matter and other impurities, wool must go through a scouring process
using hot water and chemicals before it can be turned into a usable fiber. A typical
wool scour creates an effluent load similar to the sewage from a town of 30,000
people (Center for Biological Diversity, 2021). A more detailed accounting of the full
scope of environmental harms of animal-based textiles such as wool can be found in
Shear Destruction: Wool, Fashion and the Biodiversity Crisis (Center for Biological
Diversity).
Cotton is one of the most widely used materials worldwide due to its versatility and
easy care. But despite only occupying 2.4% of the world’s cropland, cotton uses
tremendous amounts of pesticides; it is responsible for roughly one-fifth of global
insecticide use (McKinsey & Company 2020). This results in serious harm to nontarget
insects such as endangered rusty patched bumble bees and monarch butterflies. On
top of its enormous pesticide use, conventional cotton, which accounts for most
cotton grown, requires a significant amount of water during the growing process. The
cotton used in a single pair of denim jeans requires roughly 10,000 liters of water, an
amount equal to what the average person would drink over the course of ten years
(UN Climate Change, 2018). And the water that runs off cotton fields carries a heavy
pesticide load.
Unlike conventional cotton, organic cotton is not produced with synthetic pesticides.
It’s also estimated that organic cotton production uses 91% less water than
conventional cotton, in large part because genetically engineered crops generally
require more water (Chan, 2019). Organic cotton, however, is seldom used over
conventional cotton in fast fashion due to the heightened costs associated with
production.
Even fibers associated with fewer environmental harms than those reliant on oil
production and animal agriculture can cause severe damage when produced
irresponsibly and at scale to meet the demands of fast fashion. More than 150 million
trees are cut down annually to produce man-made cellulose fibers (Canopy, 2020). Of
the man-made cellulose fibers produced, up to an estimated 30% originate from
primary or endangered forests (McCullough, 2014). Additional habitat loss can result
from the soil degradation or pollution of waterways from chemicals used in
processing or at plantations (McKinsey & Company 2020).
Fast fashion also requires a significant amount of water at the factory level, which
results in roughly 93 billion cubic meters of wastewater just from textile dyeing (Lai,
2021). In low-income countries that produce a large portion of the world’s fast
fashion, such as Bangladesh, the toxic wastewater from textile factories has
historically been dumped directly into rivers or streams to reduce production costs
(Regan, 2020). This action has resulted in bodies of water changing colors from the
dye used or turning black and thick with sludge (Regan, 2020).
This polluted water introduces harms to both marine environments and humans. At
least 72 of the chemicals used in the dyeing process have been identified as toxic
(World Bank, 2014). Once these chemicals accumulate in waterways, they begin to
produce a film on the surface, blocking the entrance of light and preventing
organisms’ abilities to photosynthesize (World Bank, 2014). Reduced ability to
photosynthesize results in lower oxygen levels, or hypoxia, in the water, impacting the
ecosystem’s survivability for aquatic plants and animals. In addition to increased
prevalence of hypoxia in aquatic environments, the presence of certain chemicals
used in the dyeing process can also increase the buildup of heavy metals (World Bank,
2014).
Polluted water is often used to irrigate crops and studies have found textile dyes
present in fruits and vegetables grown around Savar in Bangladesh (Sakamoto et al.,
2019). Areas closer to industrial hubs are disproportionately impacted by the harms of
fast fashion, with costs to livelihoods due to impacted agriculture or fishing, increased
incidence of disease including jaundice or diarrhea, and decreased accessibility to safe
drinking water during the dry season, as contaminated surface water may be unable
to be effectively treated (World Bank, 2014; Ullah et al., 2006).
Pesticides used in the growing of cotton and other crops have also been found to
have harmful effects on biodiversity. The textile industry is estimated to account for
between 10-20% of global pesticide use (McKinsey & Company, 2021).
Organisms can be exposed to chemicals either directly through application or
indirectly through runoff, contamination, or secondary poisoning (Beyond Pesticides).
Exposure to pesticides is linked to a wide array of health concerns in various species
including birds, small mammals, insects, fish and humans. These health concerns
consist of reproductive effects, neurotoxicity, endocrine effects and liver and kidney
damage (Beyond Pesticides). Such harmful effects can occur after minimal exposure,
as reproductive abnormalities have been observed in multiple species following “safe”
levels of exposure as classified by the United States Environmental Protection Agency
(Beyond Pesticides).
The environmental impacts of fast fashion are not limited to the direct impacts from
the manufacturing process. Fast fashion churns out poorly made clothes with limited
lifespans because of the low quality of the materials used and because the industry
thrives on the constant business generated by quick garment turnover. The quick turnover coupled
with poor quality resulted in 60% of the items manufactured in 2012 being discarded
only a few years after purchase (Shukla, 2022). One survey in Britain found that 1 in 3
young women believed clothes to be “old” following as few as one or two wears
(McKinsey & Company, 2018).
On average consumers are keeping purchased items about half as long as they did at
the turn of the 21st century and purchasing 60% more clothing per year (Remy et
al., 2016). Based on this trend and the low prevalence of clothing recycling, over 50%
of these garments end up in landfills (Shukla, 2022).
In 2018, 11.3 million tons of textiles entered landfills
as municipal solid waste in the United States,
averaging out to roughly 70 pounds of discarded
garments per person (EPA).
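As a rough sanity check of that per-person figure, the arithmetic can be sketched as follows (this assumes short tons of 2,000 pounds and a 2018 U.S. population of about 327 million, neither of which is stated above):

    textile_waste_tons = 11_300_000      # tons of textiles landfilled in 2018 (EPA figure above)
    pounds_per_ton = 2_000               # assumption: short tons
    us_population_2018 = 327_000_000     # assumption: approximate 2018 U.S. population
    print(round(textile_waste_tons * pounds_per_ton / us_population_2018))   # about 69

The result of roughly 69 pounds per person is consistent with the "roughly 70 pounds" cited above.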
Even for the clothing that continues to be worn and
washed, an environmental toll is paid. Synthetic
fabrics release microfibers at alarming rates of
roughly 700,000 fibers per load of laundry, which
often end up in the ocean and other environments
(Ocean Clean Wash, 2019). This adds up to
approximately 500,000 tons of microfibers per year
entering the ocean (Ellen MacArthur Foundation,
2017). An IUCN report estimated that between
15%-31% of plastic pollution in the ocean could
come from household or industrial products
expelling these microplastics, with 35% of that
microplastic coming from the washing of synthetic
fabrics (Boucher and Friot, 2017).
Fibers such as polyester are slow to degrade in the ocean, taking potentially up to 200
years to decompose, and when they finally do, they produce toxic substances that pose
dangers for marine ecosystems (Brewer, 2019; Shukla, 2022). Microplastics pose the
additional danger of being consumed by marine organisms, then entering the food
chain and being consumed eventually by humans. For marine organisms that
consume microplastics, impacts may include delayed growth, abnormal behavior, or
reduced intake of food (Li et al., 2021). For humans, microplastics that have made their
way up the food chain pose risks of allergic reactions or cell death (Parker, 2022).
Despite the majority of fiber production being attributed to synthetic fabrics, a 2020
study found that most microfibers were actually from cellulosic and plant-based
fibers, followed by animal fibers (Suaria et al., 2020). While such natural fibers are often
assumed to be biodegradable, modifications made during textile production often
include alterations with chemicals, dyes, or coatings that in turn impact the
biodegradability of the material (Henry et al., 2019). Additional modifications that occur
during manufacturing are seen with wool, where natural fibers are often blended with
synthetics for fast fashion, impacting the biodegradability of the fabric (Center for
Biological Diversity, 2021).
As much of the research on the biodegradability and risks of microfibers is new or still
developing, the problem of microfiber introduction from the fast fashion industry
cannot yet be limited to the impacts from synthetics, as the full scope of risks of all
microfibers is still being realized. This brings the issue of fast fashion back to the
immense scale of production, as there is not one specific fiber to blame for the
environmental degradation but the business model as a whole.
HARMS TO HUMANS
The introduction of chemicals to the environment is not the only harm associated
with the fast fashion industry. The harsh chemicals used in manufacturing create
potential health hazards for workers and consumers. These risks can be felt in a wide
range of communities, as fast fashion garments are usually produced in low-income
countries but purchased in high-income countries.
At the beginning of the production process, pesticides can cause harm to workers as
they have been linked to acute and chronic health issues including reproductive
disorders, neurological disorders, respiratory conditions, certain cancers and death
(Farmworker Justice, 2013). In garment factories, workers are exposed to occupational
hazards including respiratory harms from chemicals and musculoskeletal harms from
repeated motions (Islam, 2022).
The harmful effects can even be experienced by the consumer of fast fashion.
Garments contain a variety of harmful chemicals including PFAS, azo dyes, phthalates,
and formaldehyde (Fashinnovation, 2022). These chemicals come with risks of
irritation; respiratory, developmental, and reproductive problems; and certain cancers.
On top of that, the spillover of cheaply made fast fashion can also affect the
economies of low-income countries, even if they are not involved directly in the
production of garments.
Every year the United States exports roughly 500,000 tons of secondhand clothing to
low- and middle-income countries that do not always possess the infrastructure to
handle it (Brooks, 2019). Reports from various African communities note how these
imports can decimate local textile businesses, as they are unable to compete with the
low prices of these used garments (Brooks, 2019). While this opens a new
market for secondhand clothing, it increases reliance on foreign countries and
suppresses local industries, resulting in a loss of culture and traditional styles (Porter,
2019).
The continuing desire around the world for these garments at low costs also
contributes to the ongoing injustice related to low wages and working conditions in
the low-income countries where most factories are based. In April 2013 the Rana Plaza
building in Dhaka, Bangladesh collapsed, resulting in more than 1,100 textile-worker
fatalities and bringing to light the subpar conditions in which fast fashion industries
operate. Between 2006 and 2012, more than 500 workers in Bangladesh garment
factories died in factory fires, usually due to faulty wiring (Thomas, 2018).
Following these tragic events, the Accord on Fire and Building Safety was signed by
various fast fashion companies, including American Eagle, H&M, and Inditex. This
agreement resulted in 97,000 hazards being repaired in 1,600 factories, and 900
factories being shut down for not meeting compliance standards (Thomas, 2018).
Following the expiration of the Accord in 2018, the 2018 Transition Accord was signed
to extend similar protections until 2021 (Clean Clothes Campaign). Most recently, the
International Accord took effect in September 2021 (International Accord, 2021). This
legally binding agreement promises to ensure factory structural safety for 26 months
by the brands that have signed, which can be found here.
Though a small step toward remedying the worker injustices in the fast fashion
industry, these pacts have yet to address low wages or health hazards associated with
this type of factory work. Beyond historical structure-related tragedies, textile workers
are exposed to various occupational hazards, including respiratory and
musculoskeletal harms (Islam, 2022). Reported health conditions that have been
documented include endocrine damage and reproductive harms, along with
accidental injuries and death (Sant’Ana and Kovalechen, 2012).
These effects are spread disproportionately across genders, as most workers in these
factories are young women (Thomas, 2018). An estimated 80% of global workers in the
garment industry are women, and despite this workplace majority, discrimination,
gender pay gaps, and sexual harassment continue to be reported (Baptist World Aid
Australia, 2019).
While many companies have — or are working to establish — systems to remedy this,
inequalities continue to exist in many of these garment manufacturing environments
(Baptist World Aid Australia, 2019). A reported 9 out of 10 garment workers in
Bangladesh are paid so unfairly for their labor that they cannot afford food for
themselves or their families (Oxfam). Yet to provide workers with a livable wage would
cost some companies as little as an estimated 1% of the retail price of garments
(Oxfam).
The gross injustices occurring within the fast fashion industry stand against the
narrative that fast fashion benefits low-income people. Rather, it exploits workers and
consumers alike.
GREENWASHING
Despite the various claims made by companies showcasing their sustainable efforts
through partial recycling or “conscious” collections, overall efforts are still relatively
low. Even the actions of companies that are following through on their pledges to be
more sustainable are not necessarily having a significant positive impact.
One of the most common recycled materials used as a substitute for new
synthetics is polyethylene terephthalate (PET) bottles. In a survey of roughly 50
fashion brands, 85% claimed that they were working toward using recycled polyester
sourced from plastic bottles (Circular). Using recycled polyester has the potential
impact of reducing carbon emissions by 32% (Federal Office for the Environment,
2017). But while recycling sounds green in theory, there are several logistical
drawbacks.
Recycling synthetic materials does not fix the emerging problem of microplastics, as
recycled materials will expel just as many fibers as new materials (Bryce, 2021).
Additionally, removing plastic bottles from their established, closed-loop system may
actually harm their overall recyclable potential. These bottles can be recycled at least
10 times in the current system. Feeding them into the fashion industry decreases
their likelihood and potential to be recycled as most garments end up in landfills
(Bryce, 2021). Despite the potential that exists with recycling plastic bottles, the
actual rate at which PET bottles are recycled remains relatively low, with only 29.1%
being recycled in 2018 (EPA). Textile recycling involves a similar shortcoming, as it’s
estimated that less than 1% of textile waste is recycled into new fibers due to
logistical issues including the collecting, sorting, and processing of garments
(McKinsey & Company, 2022).
Many claims made by fast fashion companies hint at sustainability but fall short, and
a lack of transparency contributes to the problem of greenwashing. Greenwashing is
infamous in the fast fashion industry, with multiple companies having had attention
drawn to their misleading claims in the past. Companies like Boohoo, SHEIN, H&M,
ASOS, and Zara have all released claims on their efforts to improve their
sustainability, but there’s little evidence they are realizing those claims (Rauturier,
2022; Igini, 2022).
The popular brand H&M released environmental scorecards informing consumers
about how environmentally friendly their garments were. In an investigation by
Quartz, more than half of the scorecards claimed pieces to be more
environmentally friendly than they actually were, and in some instances the
statements were described as being “the exact opposite of reality” (Quartz, 2022).
The garments included in the controversial claims were those labeled as
“Conscious Choice.” This specific label was described by H&M to mean “pieces
created with a little extra consideration for the planet," with products containing at
least 50% of "more sustainable materials" (H&M). These vaguely defined "eco-friendly"
labels are another popular industry greenwashing technique. But simultaneously
producing and promoting the purchase of billions of garments per year, many of
which get discarded and replaced quickly, reduces the potential positive impacts of
so-called "conscious collections" and falsely reassures consumers.
A PUSH TOWARD SUSTAINABILITY
While many companies have environmentally harmful business models, there are
others that are taking a more meaningful approach to sustainability. These companies
are actively encouraging people to extend the life of their clothing, providing
customers with the resources to do so, and using data to back up their sustainability
claims. These claims have been published by the companies and their accuracies have
not been evaluated by this report.
Levi's, for example, urges customers to wash their jeans less: after about 10 wears. This
not only lengthens the lifespan of jeans but saves water from washing machines and
reduces the expelling of microfibers in the wash. Data published on Levi's website
states that taking care of your jeans and wearing them for 10 months or longer will
reduce their carbon footprint by 18% and water footprint by 23%.
Levi's also offers solutions for old or damaged clothing, like opening Levi's Tailor Shops
where clothes can be altered or repaired, offering tutorials on how to perform various
DIY projects on jeans, and suggesting that you donate unwanted clothing to
secondhand shops or pass items along as hand-me-downs.
Other ways that brands are trying to lessen the waste in fashion are through product
guarantees and resale initiatives. Patagonia includes a guarantee that if clothing
develops damage due to wear, the company will repair it at a "reasonable charge."
Like Levi's, Patagonia offers DIY repair guides to extend the life of products. It also
hosts Worn Wear, a site where you can trade in used clothing so it can be washed and
resold, lengthening the garment's lifespan. As an incentive, trading in a garment will
get you credit that can be used to purchase new or used from the brand. Worn Wear
also has the additional bonus that the used articles are sold at a reduced cost
compared to new items. This increases accessibility of quality, long-lasting products to
individuals who might not be able to afford them otherwise and would resort to fast
fashion for financial reasons.
A different approach can be seen with MUD Jeans, which in 2013 introduced a
program called Lease a Jeans, where customers can pay a monthly fee to lease jeans
for a year, after which the payments stop and the customer can either keep the jeans
or return them to be recycled. In 2021, 11,512 pairs of jeans were recycled, with a
donation made to the nonprofit Justdiggit to plant one tree for every pair. By promoting
a circular economy through jeans recycling, MUD Jeans states, it’s producing no
additional end-of-life waste for those articles and using 92% less water than the
average jeans.
In addition to creative solutions to extend the lifespans of garments and reduce waste,
efforts are being made by some companies to use more sustainable materials and
manufacturing processes. For plant-based fibers like cotton, organic and recycled
materials tend to be more sustainable than conventional and virgin materials,
respectively.
To grow cotton — one of the most commonly used fabrics in the world — a substantial
amount of pesticides are conventionally used. Certified organic cotton, especially
grown in countries like the United States that have strict organic standards, does not
contain the dangerous pesticide load of conventional cotton. And recycled cotton
does not take any additional pesticides to produce, reduces water consumption, and
prevents garments from being sent to landfills.
Flax (linen) and hemp are two additional, versatile crops that can be used for textiles.
Both are relatively environmentally friendly alternatives as they require minimal water
and are often grown with little to no pesticides. Hemp grows so densely that it can
reduce competition, and it also naturally deters pests (Hymann, 2020). Linen uses less
water and fewer pesticides than conventional cotton and has the benefit that the
plant it’s derived from is typically used in its entirety, reducing overall waste during
production (Newman, 2020). Linen’s natural hues come in a variety of colors including
ivory, tan, and grays, reducing the amount of dyes necessary (Newman, 2020). When
untreated, linen is entirely biodegradable.
In a push for more sustainable options, new materials are being derived from various
types of plants. Bananatex is a relatively new fabric made from Abacá banana plants
that is fully biodegradable and circular. This plant has many environmental
advantages, including that it does not require the use of pesticides, fertilizers, or
additional water (Bananatex). These characteristics have helped to contribute to
reforestation in certain areas, strengthening biodiversity (Bananatex).
On top of using more sustainable fabrics, environmentally conscientious companies
are taking additional steps to reduce waste in their supply chains. Efforts include
using recycled, plastic-free, or compostable packaging, using less harmful chemicals,
and getting energy from cleaner sources such as solar power. While there is room for
additional reform in the fashion industry, a few examples of brands working towards
more sustainable practices can be seen here.
Necessary reform of the fast fashion industry must involve voices from all levels. This
includes individuals pushing for change, governments enacting policies that can
oversee change, and companies committing to make the change. Fast fashion
companies need to be held accountable for their destructive practices, including the
waste they produce and the worker injustice that their business models are built
around. Companies’ flimsy claims of future reform are no longer enough.
Policy efforts to improve the fashion industry have involved the health and safety of
garment workers, unfair wages, and transparency of environmental impacts. U.S.
policies of note include The Fashioning Accountability and Building Real Institutional
Change (FABRIC) Act, The Fashion and Sustainability and Social Accountability Act,
and the SWEAT Bill.
The FABRIC Act is a federal bill that was introduced in May 2022. This legislation would
protect nearly 100,000 American garment workers, improving working conditions and
wages, revitalizing the U.S. garment industry and investing in domestic apparel
production (The FABRIC Act).
The Fashion and Sustainability and Social Accountability Act was referred to the
Consumer Protection Committee in early 2022 and requires fashion manufacturers
and retail sellers to disclose environmental policies along with social due diligence
policies. This state bill would also establish a community benefit fund that would help
implement projects that directly benefit environmental justice communities (New
York Senate).
The SWEAT Bill passed assembly in March 2022. This state bill involves ensuring the
payment of wages for work that was already performed. It also “creates a lien remedy
for all employees; provides grounds for attachment; relates to procedures where
employees may hold shareholders of non-publicly traded corporations personally
liable for wage theft; relates to rights for victims of wage theft to hold the ten
members with the largest ownership interests in a company personally liable for wage
theft” (New York Senate).
If companies are required or incentivized to pursue more sustainable practices, the
scale of destruction caused by the fashion industry could be significantly lessened.
Additional work that could help to reform the fashion industry includes making
sustainable fashion more affordable, so people of limited means are not forced to buy
fast fashion, along with making fast fashion companies internalize the environmental
costs of their production and waste.
| Answer in 3-5 paragraphs and use ONLY the text provided.
What are the hidden costs of fast fashion?
Fast fashion has revolutionized the fashion industry at a cost to the environment and
human rights. The fast fashion business model relies on the exploitation of resources
and human labor to deliver garments following the latest trends to its consumers at
an unprecedented rate. This quick output of garments demands a sizeable volume of
raw materials fed into the fast fashion industry, creating a significant amount of waste,
pollution and degradation to air, water and wildlife habitat. The pollution introduced
by the fast fashion industry results in devastating impacts to both terrestrial and
aquatic environments, with harmful effects linked to habitat degradation, proliferation
of chemicals and microplastics in waterways, and the increasing impact of climate
change from anthropogenic greenhouse gas emissions.
Despite the increased demand and consumption of fast fashion garments and
people's apparent growing interest in fashion, consumers are buying more while wearing
fewer of the items they own. The poor quality of fast fashion clothing contributes to
the limited lifespans of garments, which often end up decomposing slowly in landfills
or being incinerated. In addition to degrading in landfills or being incinerated, fast
fashion clothing has also become a notorious source of microplastics in marine
environments as the cheap, plastic-based materials shed fibers that make their way to
the oceans.
On top of the environmental exploitation that allows for fast fashion’s cheap prices,
the other contributing factor is worker exploitation in low-income countries where
factories are based. Workers — primarily young women — are subjected to hazardous
working conditions while earning unlivable wages, despite the companies pulling in
massive profits.
Although both the fashion industry and consumers have indicated that sustainability
is a priority, fast fashion is an increasingly unsustainable market that continues to
grow, relatively unchecked. And the scale of this industry is enormous: For a company
such as Shein, an estimated 1,000 new styles are uploaded daily — though there has
been speculation that this figure may be a gross underestimate (Zhou, 2022). With the
average number of each garment manufactured ranging from 50-100, according to
the Shein website, this results in a minimum of 50,000 new garments created every
day.
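A minimal sketch of the arithmetic behind that minimum, using only the figures quoted above (the 1,000-styles estimate and the 50-100 garments-per-style range):

    new_styles_per_day = 1_000                                   # estimated new styles uploaded daily
    garments_per_style_low, garments_per_style_high = 50, 100    # range per the Shein website
    print(new_styles_per_day * garments_per_style_low)           # 50000 new garments per day at the low end
    print(new_styles_per_day * garments_per_style_high)          # 100000 at the high end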
Changing these practices requires drawing attention to the harms of fast fashion and
shifting the narrative from the glamour that has been assigned to overconsumption
toward fashion that embraces sustainability and justice.
INTRODUCTION
Behind the glamour of the fashion industry hides a steep environmental price. The
fashion industry as a whole is responsible for consuming 79 trillion liters of water per
year, producing over 92 million tons of solid waste per year, and contributing up to an
estimated 20% of global wastewater and 10% of CO2 emissions (Niinimaki et al., 2020;
UN Climate Change, 2018).
This output of CO2 exceeds that of the international aviation and shipping industries
combined (UN Climate Change, 2018). Concern continues to rise as, over a span of
roughly 20 years, the number of new garments made per year has nearly doubled and
global consumption of fashion has increased by 400% (World Bank, 2019; Collective
Fashion Justice). If this trend continues, industry greenhouse gas emissions could also
increase significantly, possibly by over 50% by the year 2030 (World Bank, 2019). One of
the most notorious sectors driving these harms has also become one of the fastest
growing: the fast fashion industry.
Fast fashion is an exploitative, growing industry based on the replication and mass
production of garments following current trends — a business model that has
revolutionized the industry, simplifying consumers’ purchasing process and
expediting the turnover of both garments and trends.
This transformation, however, comes at a price. Every day fast fashion companies are
capable of producing a shocking 10,000 new garment styles (Williams, 2022). These
items are produced quickly and with an excess of waste: As much as 15% of the fabric
used during manufacturing is discarded during the garment production process
(Shukla, 2022). Unethical generation of waste has become a pivotal element of
transforming the fashion industry into the polluting behemoth it is today.
In addition to the waste produced during quick manufacturing, businesses are
generating yet more pollution to protect their business models (Lieber, 2018). Brands
at all levels, from Shein to Nike to Burberry, have been found to destroy new,
undamaged products (Mayo, 2021). This has often been carried out by burning, which
introduces additional CO2 and toxic gases on top of the industry’s already large
contribution. For companies like Shein, production costs are so low that returned
items are often destined for landfills because it costs less to simply dispose of items
than put them back into circulation (Williams, 2022).
The low costs set by the fast fashion industry have been praised by some for making
new clothing more accessible to people with lower incomes, yet the largest
consumers of fast fashion include customers of relatively substantial income, while
low-income communities bear the brunt of the industry’s waste and pollution. This
further demonstrates that the goal of this industry is not inclusivity but enormous
profit based on environmental and worker exploitation (Williams, 2022). Fast fashion
has changed society’s perception of what clothing is worth. The enticing low costs in
fast fashion push poorly made garments on people, promoting excess purchasing of
cheap items destined for the landfill rather than the purchasing of higher-quality
garments that will ultimately last longer.
Clothing production adversely affects the environment at every stage. Land is cleared
or degraded to produce fossil fuels for fibers, raise animals, or grow commodity crops.
Toxic chemicals are used in processing. Greenhouse gas emissions are produced in
manufacturing and transportation, and waste is generated by factories.
Polyester, a synthetic material obtained from oil, is one of the most widely used fabrics
in the fast fashion industry. It is also one of the most environmentally harmful fabrics.
This material alone was reported to consume 70 million barrels of oil in 2015; the
production of all synthetic fibers uses approximately 342 million barrels of oil each
year (Conca, 2015; Ellen Macarthur Foundation and Circular Fibres Initiative, 2017).
Petrochemicals, in fact, were estimated to be responsible for 62% of global textile
fibers (Textile Exchange, 2021). The extraction of fossil fuels requires destroying
wildlands to develop facilities and drilling sites, affecting the habitability of land and
causing habitat fragmentation, which disrupts essential animal behaviors (The
Wilderness Society, 2021). Producing synthetics also contributes greenhouse gases to
the atmosphere due to their origin in petrochemicals.
Fossil-fuel-based fabrics, however, are not the only materials of concern in the fast
fashion industry. Producing animal-based textiles such as wool involves the breeding
of farmed animals, which often results in widespread habitat loss from deforestation
and grassland conversion to create the necessary room for grazing or to produce feed
(McKinsey & Company 2020). Animal-based fibers used in fast fashion are also
responsible for a large portion of the industry’s massive water consumption. Sheep
bred for wool require significant amounts of water for hydration and feed crops that
frequently rely on additional, chemical-intensive processes (Center for Biological
Diversity, 2021).
The wool industry degrades wildlife habitat, with sheep displacing native wildlife and
eating the vegetation they need. It also produces large amounts of wastewater,
with fecal waste polluting waterways and slaughterhouses expelling additional
wastewater. This water often contains contaminants including pathogens, proteins,
fibers, and contamination from antibiotics and other pharmaceuticals (Center for
Biological Diversity, 2021).
Since 35% to 60% of the weight of shorn wool is contaminated with grease, dirt, feces,
vegetable matter and other impurities, wool must go through a scouring process
using hot water and chemicals before it can be turned into a usable fiber. A typical
wool scour creates an effluent load similar to the sewage from a town of 30,000
people (Center for Biological Diversity, 2021). A more detailed accounting of the full
scope of environmental harms of animal-based textiles such as wool can be found in
Shear Destruction: Wool, Fashion and the Biodiversity Crisis (Center for Biological
Diversity).
Cotton is one of the most widely used materials worldwide due to its versatility and
easy care. But despite only occupying 2.4% of the world’s cropland, cotton uses
tremendous amounts of pesticides; it is responsible for roughly one-fifth of global
insecticide use (McKinsey & Company 2020). This results in serious harm to nontarget
insects such as endangered rusty patched bumble bees and monarch butterflies. On
top of its enormous pesticide use, conventional cotton, which accounts for most
cotton grown, requires a significant amount of water during the growing process. The
cotton used in a single pair of denim jeans requires roughly 10,000 liters of water, an
amount equal to what the average person would drink over the course of ten years
(UN Climate Change, 2018). And the water that runs off cotton fields carries a heavy
pesticide load.
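To put that volume in perspective, a quick check of the drinking-water comparison (assuming a typical intake of roughly 2 to 3 liters per day, an assumption not stated above):

    liters_per_pair = 10_000                         # water to grow the cotton for one pair of jeans
    liters_per_day = liters_per_pair / (10 * 365)    # spread over ten years
    print(round(liters_per_day, 1))                  # about 2.7 liters per day, in line with typical intake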
Unlike conventional cotton, organic cotton is not produced with synthetic pesticides.
It’s also estimated that organic cotton production uses 91% less water than
conventional cotton, in large part because genetically engineered crops generally
require more water (Chan, 2019). Organic cotton, however, is seldom used over
conventional cotton in fast fashion due to the heightened costs associated with
production.
Even fibers associated with fewer environmental harms than those reliant on oil
production and animal agriculture can cause severe damage when produced
irresponsibly and at scale to meet the demands of fast fashion. More than 150 million
trees are cut down annually to produce man-made cellulose fibers (Canopy, 2020). Of
the man-made cellulose fibers produced, up to an estimated 30% originate from
primary or endangered forests (McCullough, 2014). Additional habitat loss can result
from the soil degradation or pollution of waterways from chemicals used in
processing or at plantations (McKinsey & Company 2020).
|
Any information that you draw to answer any questions must come only from the information found in the prompt. Under no circumstances are you allowed to rely on any information from any source other than the information in the prompt. If the answer requires a series of steps, list them in a numbered list format.
To make changes to the settings, you must first go into Learn Mode. Press and hold the middle button until a warbling tone is heard. The unit is now in Learn Mode and is able to accept changes to the settings, as follows:
Learn Mode
Features
• Plug and Play USB and PS/2 operation and requires no drivers.
• PC, Mac and Chromebook compatible.
• Switchable to Gaming output for full compatibility
with Xbox Adaptive Controller
• Light touch joystick movement.
• User-selectable cursor speed settings.
• Drag lock and double click features.
• Sockets to operate left and right click from remote switches.
• Robust construction and ergonomic design.
• Industry-standard mounting option.
• Optional left-handed operation.
Cursor Speed
To change the speed setting while in Learn Mode, press the middle button briefly. Each time you do so, the unit emits a number of beeps, between 1 and 4. One beep indicates the lowest speed and 4 the highest. The speed of the cursor changes immediately, allowing you to experiment until the best setting is found.
Left-Handed Operation
The left and right buttons may be swapped around, which is particularly useful for left-landed users. To change this setting, press the left button while in Learn Mode. One beep indicates the unit is set to standard ‘right-handed’ mode, whereas two beeps indicates ‘left-handed’ operation.
Double Click
Right-click may be substituted with Double-Click, which is useful for users who have difficulty in double-clicking quickly enough for the computer to recognise. To change this setting, press the right button briefly while in Learn Mode. One beep indicates the unit is set to standard ‘right-click’ mode, whereas two beeps indicates ‘Double-Click’ operation.
Buzzer On/Off
OPTIMA Joystick is fitted with a buzzer which gives an audible indication of operations such as drag lock and unlock, double-click, entering Learn Mode etc. When OPTIMA Joystick is used in a classroom setting, where there may be many units in close proximity, it may be beneficial to turn off the buzzer. To achieve this, press and hold the right button while in Learn Mode, until two long beeps are heard. The buzzer is now disabled, although it will still operate while in Learn Mode. Repeating the above operation will re-enable it.
All of the above settings may be changed as often as required while in Learn Mode, allowing you to experiment with the settings until the best configuration is found. Once you are happy with the settings, they may be stored in the non-volatile memory by pressing and holding the middle button once again, until the warbling tone is heard. Normal operation then resumes. Note that if both left-handed operation and Double-Click are selected, the buttons will function
as Double-Click, Drag and Left Click, reading from left to right. Also note that the function of the sockets for external switches reproduces the function of the
internal buttons, according to the above settings. The unit automatically leaves Learn Mode, and any changes are discarded, if the settings remain unchanged for more than a minute.
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. | Aside from meetings with each other, what are some shared responsibilities of both family and administrators? | Family’s Responsibility
• Notify the school of the child’s allergies before the student attends classes.
• Work with the school team to develop an individualized healthcare plan (IHP) that
accommodates the child’s needs throughout the school including in the classroom, in the
cafeteria, in after-care programs, during school-sponsored activities, and on the school bus,
as well as an Emergency Action Plan.
• Provide current written medical documentation, instructions, and medications as directed by
your child’s HCP. Include a photo of the child for identification safety if requested.
Additional district forms will need to be completed by the student, parent, and HCP if the
student self-carries and administers medication(s). These forms must be renewed yearly or
more often if changes in your child’s condition or HCP orders occur.
• Sign release of information forms to allow communication between the school district and
your child’s HCP to allow for the best possible care for your child.
• Provide adequate properly labeled medications for your child and backup medication in the
school office if your child self-administers their medication. Replace medications after use
or upon expiration.
• Educate the child in the self-management of their food allergy including:
o recognition of safe and unsafe foods
o recognition of other allergen containing materials such as art or science supplies, band
aids, or other school supplies
o strategies for avoiding allergen exposure such as peer pressure and engaging in high-risk
activities that would increase allergen exposure
o identification, recognizing, and describing symptoms of allergic reactions
o how and when to tell an adult they may be having an allergy-related problem
o how to read food and other product labels for allergen identification (age appropriate)
o knowledge of school policies and procedures, including responsibilities in self-carrying
and administration of medications when appropriate
o practice drills and role-playing
• Provide emergency contact information and keep this up to date when changes occur.
• Notify the school nurse if changes in the IHP/EAP are needed.
• Debrief with school staff, the student’s HCP, and the student (age appropriate) after a
reaction has occurred.
• Inform school administration, school nurse, or counselor if bullying or teasing occurs.
• Approve a safe classroom treat alternative to ensure the student will not be excluded from any
classroom or school sponsored activity involving food.
• Submit to food service a signed “Medical Statement for Student Requiring Special Meals”
form.
School’s/Administrator’s Responsibilities
• Be knowledgeable about and follow applicable federal laws including ADA, IDEA, Section
504, and FERPA and any state laws or district policies that apply.
• Support and oversee faculty, staff, students, and parent/guardian in implementing all aspects
of the management plan.
• Ensure students with allergies are not excluded from school activities due to their health
condition.
• Identify a core team of, but not limited to, school nurse, teacher, principal, school food
service manager, transportation director, counselor (if available) to work with parents and the
student (age appropriate) to establish a prevention plan distinguishing between building-wide, classroom and individual approaches to allergy prevention and management. Changes
to the prevention plan to promote revisions in allergy management should be made with core
team participation.
• Provide input to the core team in the development and implementation of related policies and
procedures. Ensure implementation of these policies and procedures.
• Ensure annual training of all staff interacting with the student on a regular basis to:
understand the student’s specific allergy(s), recognize allergic symptoms, and know actions
to take in an emergency (including epinephrine administration as directed). Work with
school staff to eliminate the use of potential allergens in the student’s meals, educational
tools, arts and crafts projects. All school staff are to be annually trained by the school nurse
in general information regarding recognition, prevention and response to allergic reactions.
• Ensure protocols are in place for training substitute staff who may have responsibility for a
student with a life-threatening allergy including teachers, school nurses, nutrition services,
recess and/or lunch aides, bus driver, and other specialists.
o Include any responsibilities expected of these individuals to implement specific IHP/EAP
or school-specific food allergy policies. Contingency plans must be in place if a
substitute cannot be trained to handle an allergy emergency.
• Assure for the age-appropriate education of all students including potential causes of allergic
reactions, information on avoiding allergens, signs and symptoms of allergic reactions and
simple steps students can take to keep classmates safe.
• Provide for practice of the Emergency Action Plan before an allergic reaction occurs to
assure the efficiency/effectiveness of the plans.
• Coordinate with the school nurse to assure medications are appropriately stored, and an
emergency kit(s) is available and accessible containing a current standing order for
epinephrine from an HCP (as allowed by school district policy).
• Assure that protocols permit students to carry their own epinephrine after approval from the
student’s HCP, parent, and school nurse.
• Work with the school nurse in designation of school personnel who are properly trained to
administer emergency medications in accordance with all applicable state laws and school
district policy during the school day and all school activities (including field trips).
• Ensure posting of a list of Cardio Pulmonary Resuscitation (CPR) certified staff in the
building and a system for communicating with them and eliciting an immediate response in
emergencies.
• Ensure systems are in place to inform the parent/guardian(s) if any student experiences an
allergic reaction at school.
• Review policies/prevention plan with the core team members, parents/guardians, student (age
appropriate), and HCP as appropriate after a reaction has occurred.
• Work with the district transportation director to assure that school bus driver training
includes symptom awareness and actions to be taken if a reaction occurs.
• Recommend that all buses have communication devices in case of an emergency.
• Enforce a “no eating” policy on school buses with exceptions made only to accommodate
special needs under federal or similar laws, or school district policy.
• Encourage a “no sharing” policy in lunchrooms and provide for the identification of “allergy-aware” tables. Ensure surfaces are cleaned according to district policy/procedures to avoid
exposure by cross contamination.
• Discuss field trips with the family to decide appropriate strategies for managing the student’s
allergy(s).
• Follow federal/state/district laws and regulations regarding sharing medical information
about the student.
• Provide safe environments, both physically and emotionally (develop and enforce strict anti-bullying policies).
• Ensure after-hours users of the school building are informed of and following all restrictions
and rules impacting the use of common spaces and individual classrooms.
• Discourage school staff from the use of food or other allergen products such as latex balloons
as a reward for school activities. The building administrator must approve any food
preparation or consumption in any instructional area.
Use only the provided text to form a concise answer. | Summarize only the different types of Lupus that generally affect the organs. | What is Lupus?
Lupus is a chronic, autoimmune disease that can damage any part of the body (skin,
joints, and/or organs inside the body). Chronic means that the signs and symptoms
tend to last longer than six weeks and often for many years. In lupus, something
goes wrong with the immune system, which is the part of the body that fights off
viruses, bacteria, and germs ("foreign invaders," like the flu). Normally our immune
system produces proteins called antibodies that protect the body from these
invaders. Autoimmune means the immune system cannot tell the difference between
these foreign invaders and the body’s healthy tissues ("auto" means "self") and
creates autoantibodies that attack and destroy healthy tissue. These autoantibodies
cause inflammation, pain, and damage in various parts of the body.
Lupus is also a disease of flares (the symptoms worsen and the patient feels
ill) and remissions (the symptoms improve and the patient feels better). Lupus
can range from mild to life-threatening and should always be treated by a
doctor. With good medical care, most people with lupus can lead a full life.
Lupus is not contagious, not even through sexual contact. You cannot "catch"
lupus from someone or "give" lupus to someone.
Lupus is not like or related to cancer. Cancer is a condition of malignant,
abnormal tissues that grow rapidly and spread into surrounding tissues.
Lupus is an autoimmune disease, as described above.
Lupus is not like or related to HIV (Human Immune Deficiency Virus) or AIDS
(Acquired Immune Deficiency Syndrome). In HIV or AIDS the immune system is
underactive; in lupus, the immune system is overactive.
It is estimated that at least 1.5 million Americans have lupus. The actual
number may be higher; however, there have been no large-scale studies to
show the actual number of people in the U.S. living with lupus.
It is believed that 5 million people throughout the world have a form of lupus.
Lupus strikes mostly women of childbearing age (15-44). However, men,
children, and teenagers develop lupus, too.
Women of color are 2-3 times more likely to develop lupus.
People of all races and ethnic groups can develop lupus.
More than 16,000 new cases of lupus are reported annually across the
country.
What causes Lupus?
Genes
No gene or group of genes has been proven to cause lupus. Lupus does, however,
appear in certain families, and when one of two identical twins has lupus, there is
an increased chance that the other twin will also develop the disease. These
findings, as well as others, strongly suggest that genes are involved in the
development of lupus. Although lupus can develop in people with no family history
of lupus, there are likely to be other autoimmune diseases in some family members.
Certain ethnic groups (people of African, Asian, Hispanic/Latino, Native American,
Native Hawaiian, or Pacific Island descent) have a greater risk of developing lupus,
which may be related to genes they have in common.
Environment
While a person’s genes may increase the chance that he or she will develop lupus, it
takes some kind of environmental trigger to set off the illness or to bring on a flare.
Examples include:
ultraviolet rays from the sun
ultraviolet rays from fluorescent light bulbs
sulfa drugs, which make a person more sensitive to the sun, such as: Bactrim®
and Septra® (trimethoprim-sulfamethoxazole); sulfisoxazole (Gantrisin®);
tolbutamide (Orinase®); sulfasalazine (Azulfidine®); diuretics
sun-sensitizing tetracycline drugs such as minocycline (Minocin®)
penicillin or other antibiotic drugs such as: amoxicillin (Amoxil®); ampicillin
(Ampicillin Sodium ADD-Vantage®); cloxacillin (Cloxapen®)
an infection
a cold or a viral illness
exhaustion
an injury
emotional stress, such as a divorce, illness, death in the family, or other life
complications
anything that causes stress to the body, such as surgery, physical harm,
pregnancy, or giving birth
Although many seemingly unrelated factors can trigger the onset of lupus in a
susceptible person, scientists have noted some common features among many
people who have lupus, including:
exposure to the sun
an infection
being pregnant
giving birth
a drug taken to treat an illness
However, many people cannot remember or identify any specific factor that
occurred before they were diagnosed with lupus.
Hormones
Hormones are the body’s messengers and they regulate many of the body’s
functions. In particular, the sex hormone estrogen plays a role in lupus. Men and
women both produce estrogen, but estrogen production is much greater in females.
Many women have more lupus symptoms before menstrual periods and/or during
pregnancy, when estrogen production is high. This may indicate that estrogen
somehow regulates the severity of lupus. However, it does not mean that estrogen,
or any other hormone for that matter, causes lupus.
Types of Lupus?
Systemic Lupus Erythematosus. Systemic lupus is the most common form of lupus,
and is what most people mean when they refer to "lupus." Systemic lupus can be
mild or severe. Some of the more serious complications involving major organ
systems are:
inflammation of the kidneys (lupus nephritis), which can affect the body’s
ability to filter waste from the blood and can be so damaging that dialysis
or kidney transplant may be needed
an increase in blood pressure in the lungs (pulmonary hypertension)
inflammation of the nervous system and brain, which can cause memory
problems, confusion, headaches, and strokes
inflammation in the brain’s blood vessels, which can cause high fevers,
seizures, and behavioral changes
hardening of the arteries (coronary artery disease), which is a buildup of
deposits on coronary artery walls that can lead to a heart attack
Cutaneous Lupus Erythematosus. Cutaneous refers to the skin, and this form of
lupus is limited to the skin. Although there are many types of rashes and lesions
(sores) caused by cutaneous lupus, the most common rash is raised, scaly and red,
but not itchy. It is commonly known as a discoid rash, because the areas of rash are
shaped like disks, or circles. Another common example of cutaneous lupus is a rash
over the cheeks and across the bridge of the nose, known as the butterfly rash.
Other rashes or sores may appear on the face, neck, or scalp (areas of the skin that
are exposed to sunlight or fluorescent light), or in the mouth, nose, or vagina. Hair
loss and changes in the pigment, or color, of the skin are also symptoms of
cutaneous lupus.
Approximately 10 percent of people who have cutaneous lupus will develop
systemic lupus. However, it is likely that these people already had systemic lupus,
with the skin rash as their main symptom.
Drug-induced Lupus Erythematosus. Drug-induced lupus is a lupus-like disease
caused by certain prescription drugs. The symptoms of drug-induced lupus are
similar to those of systemic lupus, but only rarely will any major organs be affected.
The drugs most commonly connected with drug-induced lupus are hydralazine (used
to treat high blood pressure or hypertension), procainamide (used to treat irregular
heart rhythms), and isoniazid (used to treat tuberculosis). Drug-induced lupus is more
common in men because they are given these drugs more often; however, not
everyone who takes these drugs will develop drug-induced lupus. The lupus-like
symptoms usually disappear within six months after these medications are stopped.
Neonatal Lupus. Neonatal lupus is a rare condition that affects infants of women
who have lupus and is caused by antibodies from the mother acting upon the infant
in the womb. At birth, the infant may have a skin rash, liver problems, or low blood
cell counts, but these symptoms disappear completely after several months with no
lasting effects. Some infants with neonatal lupus can also have a serious heart
defect. With proper testing, physicians can now identify most at-risk mothers, and
the infant can be treated at or before birth. Most infants of mothers with lupus are
entirely healthy.
Please answer questions using the text found in the prompt only. Do not use any external information whatsoever! | If I had just met a person of the advisory committee for the first time yesterday, would this count as a covered relationship? | Under section 502, the following scenario would also raise a
potential appearance issue: where a person (or entity) with whom the
advisory committee member has a “covered relationship” is or represents a “party to the matter” coming before the advisory committee.
Both “covered relationship” and “party to the matter” are described below. “Covered Relationship”: Section 502(b)(1) provides that a
member has a “covered relationship” with the following people and
entities:
(i) A person with whom the member has or is seeking a business,
contractual, or other financial relationship other than a routine
consumer transaction;
(ii) A person who is a member of her household or a relative with
whom she has a close personal relationship;
(iii) A person or entity for which the member has, within the last
year, served as an employee, officer, director, consultant, agent,
attorney, trustee, contractor, or general partner;
(iv) A person or entity for which the member’s spouse, parent, or
dependent child currently serves or is seeking to serve as an
employee, officer, director, consultant, contractor, agent, attorney,
trustee, or general partner; and
(v) An organization, other than a political party, in which the
member is an “active participant.” Mere membership in an
organization, payment of dues, or the donation or solicitation of
financial support does not, by itself, constitute active participation.
Use the source provided only. | What is the history of taxes in the United States? | Taxes in the United States:
History, Fairness, and
Current Political Issues
by Brian Roach
A GDAE Teaching Module
on Social and Environmental
Issues in Economics
Global Development And Environment Institute
Tufts University
Medford, MA 02155
http://ase.tufts.edu/gdae
Copyright © 2010 Global Development And Environment Institute, Tufts University.
Copyright release is hereby granted for instructors to copy this module for instructional purposes.
Students may also download the module directly from http://ase.tufts.edu/gdae.
Comments and feedback from course use are welcomed:
Tufts University Global Development And Environment Institute
Tufts University
Medford, MA 02155
http://ase.tufts.edu/gdae
E-mail: [email protected]
I. INTRODUCTION
“The hardest thing in the world to understand is income tax!” – Albert Einstein
Taxes are complicated. The U.S. federal tax code contains over three million words –
about 6,000 pages. A casual browsing of the tax code’s table of contents offers a glimpse
into the vast complexity of federal taxation. Entire sections of the tax code apply
specifically to the taxation of vaccines (Sec. 4131-4132), shipowners' mutual protection
and indemnity associations (Sec. 526), specially sweetened natural wines (Sec. 5385),
and life insurance companies (Sec. 801-818). Annual changes to the tax code imply that
taxes will continue to become more complex even as politicians tout tax simplification.
Taxes levied by other jurisdictions, such as states and cities, add further complexity to
taxation in the U.S. Americans spend billions of hours each year working on their taxes,
not to mention the costs of accountants and tax preparers.
Fortunately, one needn’t comprehend the imposing complexity of the tax code to
understand the crucial role of taxes in American society. Taxation is an important, but
commonly neglected, topic for students of economics, political science, and other
disciplines. Tax policy has important economic consequences, both for the national
economy and for particular groups within the economy. Tax policies are often designed
with the intention of stimulating economic growth – although economists differ
drastically about which policies are most effective at fostering growth. Taxes can create
incentives promoting desirable behavior and disincentives for unwanted behavior.
Taxation provides a means to redistribute economic resources towards those with low
incomes or special needs. Taxes provide the revenue needed for critical public services
such as social security, health care, national defense, and education.
Taxation is as much of a political issue as an economic issue. Political leaders have used
tax policy to promote their agendas by initiating various tax reforms: decreasing (or
increasing) tax rates, changing the definition of taxable income, creating new taxes on
specific products, etc. Of course, no one particularly wants to pay taxes. Specific
groups, such as small-business owners, farmers, or retired individuals, exert significant
political effort to reduce their share of the tax burden. The voluminous tax code is
packed with rules that benefit a certain group of taxpayers while inevitably shifting more
of the burden to others. Tax policy clearly reflects the expression of power in the U.S. –
those without power or favor are left paying more in taxes while others reap the benefits
of lower taxes because of their political influence. Broad attempts to reform the tax
system have produced dramatic and sudden shifts in tax policy, generally motivated by
political factors rather than sound economic theory. For example, the top marginal
federal tax bracket on individual income in the U.S. dropped precipitously from 70% to
28% during the 1980s. Tax policy has clearly been used to promote political, as well as
economic, agendas.
This module is intended to provide a basic understanding of the economic, political, and
social context of the entire U.S. tax system. When most people think about taxes, they
tend to think only of the federal income tax. However, looking solely at the federal
income tax would miss several important issues. Perhaps most importantly, the federal
income tax is not the largest tax bill to most Americans. We’ll see that the largest tax for
most Americans is federal social insurance taxation. Also, the federal income tax is one
of the most progressive taxes in the U.S. system. When all taxes are considered, the U.S.
tax system is much less progressive. You may be surprised to find out how many taxes in
the U.S. are actually regressive – hitting low-income households at a disproportionately
high rate.
This module is divided into three major sections. First, some basic terms will be defined
and discussed, including tax progressivity and the differences between several types of
taxes. Second, a brief overview of tax history in the United States will be presented.
Third, data on tax trends will be used to illustrate the changing nature of taxation with a
focus on the overall progressivity of the entire tax system.
II. THE STRUCTURE OF TAXATION IN THE UNITED STATES
Tax Progressivity
The overall system of taxation in the United States is progressive. By a progressive tax
system, we mean that the percentage of income an individual (or household) pays in taxes
tends to increase with increasing income. Not only do those with higher incomes pay
more in total taxes, they pay a higher rate of taxes. This is the essence of a progressive
tax system. For example, a person making $100,000 in a year might pay 25% of their
income in taxes ($25,000 in taxes), while someone with an income of $30,000 might only
pay a 10% tax rate ($3,000 in taxes).
A tax system may also be regressive or proportional. A regressive tax system is one
where the proportion of income paid in taxes tends to decrease as one’s income increases.
A proportional tax system simply means that everyone pays the same tax rate regardless
of income. A particular tax system may display elements of more than one approach.
Consider a hypothetical tax system where one pays a proportional, or flat 1 , rate on
income below a certain dollar amount and then progressively increasing rates above that
dollar amount. Also, within an overall tax system, some particular taxes might be
progressive while other taxes are regressive. We’ll see later on that this is the case in the
United States.
The Reasons for Progressive Taxation
The overall tax system of the United States, and in most other countries, is progressive
for a number of reasons. A progressive tax embodies the concept that those with high
incomes should pay more of their income in taxes because of their greater ability to pay
without critical sacrifices. By paying a tax, any household must forego an equivalent
amount of spending on goods, services, or investments. For a high-income household,
these foregone opportunities might include a second home, an expensive vehicle, or a
purchase of corporate stock. A low-income household, by comparison, might have to
forego basic medical care, post-secondary education, or vehicle safety repairs. As
income increases, the opportunity costs of paying taxes tend to be associated more with
luxuries rather than basic necessities. The ability-to-pay principle recognizes that a flat
(or regressive) tax rate would impose a larger burden, in terms of foregone necessities, on
low-income households as compared to high-income households.
[Footnote 1] This is not exactly the same concept embodied in current proposals for a “flat tax” in the U.S. These proposals would set just one tax rate but would exclude a given amount of income from taxation. Thus, the flat tax proposals would retain a small degree of progressivity.
A progressive tax system is also a mechanism to address economic inequalities in a
society. To evaluate a tax system’s impact on inequality, one must consider both the
distribution of taxes paid and the distribution of the benefits derived from tax revenue. If
the benefits of programs funded by taxation primarily benefit low-income households
while high-income households pay the majority of taxes, then the tax system effectively
operates as a transfer mechanism. Increasing the progressivity of the tax system or
altering the distribution of benefits allows greater redistribution of economic resources.
We’ll mainly focus on tax payments in this module but you should also be aware that the
benefits of public expenditures are not evenly distributed throughout society. 2
There is also an economic argument for a progressive tax system – it may yield a given
level of public revenue with the least economic impact. To see why, consider how
households with different levels of income would respond to a $100 tax cut. A low-income household would tend to quickly spend the entire amount on needed goods and
services – injecting $100 of increased demand into the economy. By comparison, a high-income household might only spend a fraction on goods and services, choosing to save or
invest a portion of the money. The money that a high-income household saves or invests
does not add to the overall level of effective demand in an economy. 3 In economic
terms, we say that the marginal propensity to consume tends to decrease as income
increases. So, by collecting proportionally more taxes from high-income households we
tend to maintain a higher level of effective demand and more economic activity.
Of course, one can posit that a tax system can become too progressive. Extremely high
tax rates at high-income levels might create a significant disincentive that reduces the
productive capacity of society. Very high taxes might limit the risks taken by
entrepreneurs, stifling innovations and technological advances. The desire to “soak the
rich” through an extremely progressive tax system might be viewed as unfair, and not just
by the rich. In fact, this was a concern of the Constitutional framers – that a democratic
majority would eventually impose unduly burdensome taxes on the wealthy minority.
We’ll see that their concerns have proved groundless. Many critics of the current tax
system point to the contrary position – that the powerful minority have used their might
to shift the tax burden away from themselves onto an immobilized and misinformed
majority.
[Footnote 2] The distribution of the benefits derived from public expenditures is, of course, more difficult to determine than the distribution of tax payments. The distribution of public assistance programs can be easily measured. However, the distribution of the benefits of scientific research support, business subsidies, public works, national defense, and other expenditures is a difficult research task.
[Footnote 3] Money saved or invested may, however, provide the financial capital necessary to increase the productive capacity of the economy. “Supply-side” economists stress the importance of investment by the wealthy as the key to economic growth.
Even if one could devise a tax system that is economically optimal (i.e., producing the
highest overall level of economic growth), the topic of taxation encompasses ideals about
equity and fairness. A society may be willing to sacrifice some degree of economic
growth in exchange for a more equitable distribution of economic resources. This is not
to say that economic growth must always be sacrificed with redistribution. In fact,
analysis of the U.S. historical data finds that high levels of economic growth tend to be
associated with periods of relatively equitable distribution of economic resources
(Krugman, 2002).
We now turn to differentiating between the different types of taxes levied in the U.S.
We’ll first discuss several forms of federal taxation, roughly in order of the revenue they
generate, and then consider taxation at the state and local levels. A final section will
consider taxes that are generally not used in the U.S. but are important in other nations.
Federal Income Taxes
The federal income tax is the most visible, complicated, and debated tax in the U.S. The
federal income tax was established with the ratification of the 16th Amendment to the
U.S. Constitution in 1913. It is levied on wages and salaries as well as income from
many other sources including interest, dividends, capital gains, self-employment income,
alimony, and prizes. To understand the basic workings of federal income taxes, you need
to comprehend only two major issues. First, all income is not taxable – there are
important differences between “total income,” “adjusted gross income,” and “taxable
income.” Second, you need to know the distinction between a person’s “effective tax
rate” and “marginal tax rate.”
Total income is simply the sum of income an individual or couple 4 receives from all
sources. For most people, the largest portion of total income comes from wages or
salaries. Many people also receive investment income from the three standard sources:
interest, capital gains, and dividends. Self-employment income is also included in total
income, along with other types of income such as alimony, farm income, and gambling
winnings.
The amount of federal taxes a person owes is not calculated based on total income.
Instead, once total income is calculated, tax filers are allowed to subtract some expenses
as non-taxable. To obtain adjusted gross income (AGI), certain out-of-pocket expenses
made by a tax filer are subtracted from total income. These expenses include individual
retirement account contributions, allowable moving expenses, student loan interest,
tuition, and a few other expenses. AGI is important because much of the tax data
presented by the IRS are sorted by AGI.
[Footnote 4] Married couples have the option of filing their federal taxes either jointly or separately. Children aged 14 or over with sufficient income ($7,700 in 2002) have to file their own federal income tax returns.
However, taxes are not calculated based on AGI either. Taxable income is basically
AGI less deductions and exemptions. Deductions are either standard or itemized. The
standard deduction is a fixed amount excluded from taxation – for the 2009 tax year the
standard deduction was $5,700 for single individuals and $11,400 for married couples.
Tax filers have the option of itemizing their deductions. To itemize, a tax filer adds up
certain expenses made during the year including state taxes, real estate taxes, mortgage
interest, gifts to charity, and major medical expenses. 5 If the itemized deductions
exceed the standard deduction, then the itemized total is deducted instead. Exemptions
are calculated based on the number of tax filers and dependents. A single tax filer with
no dependent children can claim one exemption. A married couple with no children can
claim two exemptions. Each dependent child counts as one more exemption. Additional
exemptions are given for being age 65 or over or blind. In 2009, each exemption
excluded a further $3,650 from taxation. 6
Taxable income is obtained by subtracting the deduction and exemption amounts from
AGI. This is the amount a taxpayer actually pays taxes on. However, the amount of tax
owed is not simply a multiple of taxable income and a single tax rate. The federal
income tax system in the U.S. uses increasing marginal tax rates. This means that
different tax rates apply on different portions of a person’s income. The concept is best
illustrated with an example using the 2009 tax rates. For a single filer, the first $8,350 of
taxable income (not total income or AGI) is taxed at a rate of 10%. Taxable income
above $8,350 but less than $33,950 is taxed at a rate of 15%. Taxable income above
$33,950 but less than $82,250 is taxed at a rate of 25%. Income above $82,250 is taxed
at higher marginal rates – 28%, 33%, and 35%.
Consider how we would calculate the taxes due for a single tax filer (let’s call her Susan)
with no children and a total income of $35,000. Assume Susan contributed $3,000 to an
individual retirement account and that this is her only allowable adjustment expense.
Thus, her AGI is $32,000. She claims one exemption (herself) in the amount of $3,650
and the standard deduction of $5,700. Thus, Susan’s taxable income is $22,650. On the
first $8,350 of taxable income she owes 10% in taxes, or $835. The tax rate on the rest of
her income is 15% for a tax of $2,145, (($22,650 - $8,350) × 0.15). So, her total federal
income tax bill is $2,980, ($835 + $2,145). Note that Susan’s taxable income is $12,350
less than her total income.
While Susan paid a maximum tax rate of 15%, we can see that her effective tax rate is
much lower. An effective tax rate can be calculated based on total income, AGI, or
taxable income. Suppose we wish to calculate Susan’s effective tax rate based on her
total income of $35,000. Given that her federal income tax is $2,980, her effective tax
rate is only 8.5%, (($2,980/$35,000) × 100). If we based her effective tax rate on her
AGI, it would be 9.3%, (($2,980/$32,000) × 100).
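As a quick check on the arithmetic above, the bracket logic can be written out in a few lines of code. The sketch below is only an illustration: the thresholds, rates, deduction, and exemption amounts are the 2009 single-filer figures quoted in the text, the top brackets are collapsed for brevity, and the function and variable names are assumptions of this example rather than anything defined in the module.

```python
# Minimal sketch of the 2009 single-filer calculation described above.
# Bracket limits and rates are taken from the text; names are illustrative only.
BRACKETS_2009_SINGLE = [
    (8_350, 0.10),         # first $8,350 of taxable income taxed at 10%
    (33_950, 0.15),        # taxable income up to $33,950 taxed at 15%
    (82_250, 0.25),        # taxable income up to $82,250 taxed at 25%
    (float("inf"), 0.28),  # higher brackets (28%, 33%, 35%) collapsed for brevity
]

def income_tax(taxable_income: float) -> float:
    """Apply each marginal rate only to the slice of income that falls in its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS_2009_SINGLE:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

# Susan: $35,000 total income, $3,000 IRA adjustment, one exemption, standard deduction.
total_income = 35_000
taxable = total_income - 3_000 - 3_650 - 5_700   # taxable income = $22,650
tax = income_tax(taxable)                        # $835 + $2,145 = $2,980
effective_rate = 100 * tax / total_income        # about 8.5% of total income
print(taxable, round(tax), round(effective_rate, 1))
```

Running the sketch reproduces the $2,980 federal income tax bill and the 8.5% effective rate on total income worked out for Susan above.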
[Footnote 5] Note that some expenses, such as moving costs, are subtracted from total income to obtain AGI while other expenses, such as mortgage interest, are classified as deductions from AGI to obtain taxable income.
[Footnote 6] Those with high incomes (more than $125,100 for an individual) have their exemption allowance either reduced or eliminated.
Social Insurance Taxes
Taxes for federal social insurance programs, including Social Security, Medicaid, and
Medicare, are taxed separately from income. Social insurance taxes are levied on
salaries and wages, as well as income from self-employment. For those employed by
others, these taxes are generally deducted directly from their paycheck. These deductions
commonly appear as “FICA” taxes – a reference to the Federal Insurance Contributions
Act. Self-employed individuals must pay their social insurance taxes when they file their
federal income tax returns.
Social insurance taxes are actually two separate taxes. The first is a tax of 12.4% of
wages, which is primarily used to fund Social Security. Half of this tax is deducted from
an employee’s paycheck while the employer is responsible for matching this contribution.
The other is a tax of 2.9% for the Medicare program. Again, the employee and employer
each pay half. Thus, social insurance taxes normally amount to a 7.65% deduction from
an employee’s wage (6.2% + 1.45%). Self-employed individuals are responsible for
paying the entire share, 15.3%, themselves.
There is a very important difference between these two taxes. The Social Security tax is
due only on the first $106,800 (in 2009) of income. On income above $106,800, no
additional Social Security tax is paid. In other words, the maximum Social Security tax
in 2009 that would be deducted from total wages is $6,622 ($106,800 × 0.062). The
Medicare tax, however, is paid on all wages. Thus, the Medicare tax is truly a flat tax
while the Social Security tax is a flat tax on the first $106,800 of income but then
becomes a regressive tax when we consider income above this limit.
Consider the impact of social insurance taxes on two individuals, one making a typical
salary of $45,000 and another making $300,000. The typical worker would pay 7.65%
on all income, or $3,443, in federal social insurance taxes. The high-income worker
would pay the maximum Social Security contribution of $6,622 plus $4,350 for Medicare
(1.45% of $300,000) for a total bill of $10,972. This works out to a 3.7% overall tax rate,
or less than half the tax rate paid by the typical worker. As the high-income individual
pays a lower rate of taxation, we see that social insurance taxes are regressive.
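The comparison can be verified with the following Python sketch, an illustration added here (not from the original text) that uses the 2009 employee-side rates and the $106,800 wage base described above.

```python
# A minimal sketch of the employee share of 2009 federal social insurance taxes.
SS_RATE = 0.062          # employee share of the Social Security tax
MEDICARE_RATE = 0.0145   # employee share of the Medicare tax
SS_WAGE_BASE = 106_800   # 2009 cap on wages subject to the Social Security tax

def social_insurance_tax(wages):
    """Social Security applies only up to the wage base; Medicare has no cap."""
    ss = min(wages, SS_WAGE_BASE) * SS_RATE
    medicare = wages * MEDICARE_RATE
    return ss + medicare

for wages in (45_000, 300_000):
    tax = social_insurance_tax(wages)
    print(f"${wages:,}: ${tax:,.2f} ({tax / wages:.2%})")
# $45,000: $3,442.50 (7.65%), which rounds to the $3,443 in the text
# $300,000: $10,971.60 (3.66%), the $10,972 and roughly 3.7% cited above
```

The output illustrates the regressivity described above: the higher earner pays a much larger dollar amount but less than half the effective rate.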
Federal Corporate Taxes
Corporations must file federal tax forms that are in many ways similar to the forms
individuals complete. Corporate taxable income is defined as total revenues minus the
cost of goods sold, wages and salaries, depreciation, repairs, interest paid, and other
deductions. Thus corporations, like individuals, can take advantage of many deductions
to reduce their taxable income. In fact, a corporation may have so many deductions that
it actually ends up paying no tax at all or even receives a rebate check from the federal
government. We’ll discuss this issue further later in the module.
Corporate tax rates, like personal income tax rates, are progressive and calculated on a
marginal basis. In 2009, the lowest corporate tax rate, applied to profits lower than
$50,000, was 15%. The highest marginal corporate tax rate, applied to profits between
$100,000 and $335,000, was 39%. 7 As with individuals, the effective tax rate
corporations pay is lower than their marginal tax rate.
Federal Excise Taxes
An excise tax is a tax on the production, sale, or use of a particular commodity. The
federal government collects excise taxes from manufacturers and retailers for the
production or sale of a surprising number of products including tires, telephone services,
air travel, transportation fuels, alcohol, tobacco, and firearms.
Unlike a sales tax, which is evident as an addition to the selling price of a product, excise
taxes are normally incorporated into the price of a product. In most cases, consumers are
not directly aware of the federal excise taxes they pay. However, every time you buy
gas, make a phone call, fly in a commercial plane, or buy tobacco products, you are
paying a federal excise tax. For example, the federal excise tax on gasoline as of 2009
was about 18 cents per gallon.
Federal excise taxes are another example of a regressive tax. Lower-income households
tend to spend a greater portion of their income on goods that are subject to federal excise
taxes. This is particularly true for gasoline, tobacco, and alcohol products.
Federal Estate and Gift Taxes
The vast majority of Americans will never be affected by the federal estate or gift taxes.
These taxes apply only to the wealthiest Americans. The estate tax is applied to transfers
of large estates to beneficiaries. Similar to the federal income tax, there is an exemption
amount that is not taxed. Only estates valued above the exemption amount are subject to
the estate tax, and the tax only applies to the value of the estate above the exemption. For
example, if the tax rate were 45% and the exemption amount was $2 million, then the tax
on an estate valued at $3.5 million would be $675,000, (($3,500,000 - $2,000,000) × 0.45).
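Expressed as a function, the calculation is a one-liner; the Python sketch below is only an illustration using the hypothetical 45% rate and $2 million exemption from the example above.

```python
# A minimal sketch: the estate tax applies only to value above the exemption.
def estate_tax(estate_value, exemption=2_000_000, rate=0.45):
    return max(estate_value - exemption, 0) * rate

print(estate_tax(3_500_000))  # 675000.0, matching the example above
```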
As of Fall 2010, the future of the estate tax is in limbo. Under the Economic Growth and
Tax Relief Act of 2001, estate tax rates were gradually reduced, and exemption amounts
gradually increased, over the period 2001-2009. In 2001, the exemption amount was
$675,000 and the tax rate was 55%. For the 2009 tax year, the exemption amount
was $3.5 million and the tax rate was 45%. But for 2010, there is no estate tax at all!
Then, in 2011, the tax is scheduled to be reinstated with an exemption of $1 million and a
tax rate of 55%. The ongoing debate over the estate tax will be covered in more detail
later in this module.
The transfer of large gifts is also subject to federal taxation. The estate tax and gift tax
are complementary because the gift tax essentially prevents people from giving away
their estate to beneficiaries tax-free while they are still alive. In 2009, gifts under
$13,000 were excluded from the tax. Similar to the federal income tax, the gift tax rates
are marginal and progressive, with a maximum tax rate of 45%.
7 For the highest profit bracket – profits above $18,333,333 – the marginal rate was 35%.
The estate and gift taxes are the most progressive element of federal taxation. The estate
tax is paid exclusively by those with considerable assets. Even further, the majority of all
estate taxes are paid by a very small number of wealthy taxpayers. According to the Tax
Policy Center, in 2009 the richest 0.1% of those subject to the estate tax pay 42% of the
total estate tax revenue. (Tax Policy Center, 2010).
State and Local Taxes
Like the federal government, state governments also rely on tax revenues to fund public
expenditures and transfer programs. Like the federal government, state governments rely
on several different tax mechanisms including income taxes, excise taxes, and corporate
taxes. Thus, much of the above discussion applies to the tax structures in place in most
states. However, there are some important differences that deserve mention.
First, nearly all states (45 as of 2010) have instituted some type of general sales tax.
State sales tax rates range from 2.9% (Colorado) to 8.25% (California 8 ). A few states
reduce the tax rate on certain goods considered to be necessities, such as food and
prescription drugs. For example, the general sales tax in Illinois is 6.25% but most food
and drug sales are taxed at only 1%. Other states with sales taxes exempt some
necessities from taxation entirely. In most states, localities can charge a separate sales
tax. While local sales taxes are generally lower than state sales taxes, there are
exceptions. In New York the state sales tax is 4% but local sales taxes are often higher
than 4%.
Unlike income taxes, sales taxes tend to be quite regressive. The reason is that low-income households tend to spend a larger share of their income on taxable items than
high-income households. Consider gasoline – an item that tends to be a smaller share of
total expenditures as income rises. An increase in the state taxes on gasoline impacts
low-income households more than high-income households. Some states, such as Idaho
and Kansas, offer low-income households a tax credit to compensate for the regressive
nature of state sales taxes.
Forty-one states levy an income tax. 9 Most of these states have several progressive tax
brackets (up to 12 rates) similar to the federal income tax. However, state income taxes
tend to be much less progressive than the federal income tax. Six states have only one
income tax rate, meaning that their income tax approaches a flat tax. Several more states
approach a flat tax because the top rate applies at a low income or the rates are relatively
constant. For example, Maine’s two tax rates are 6.50% and 6.85%.
8 Local sales taxes are also levied in some municipalities in California, which can raise the total sales tax to as high as 10.75%.
9 Two other states, Tennessee and New Hampshire, levy no state income tax but do tax dividends and interest.
Another important distinction between the federal system of taxation and the taxes levied
at state and local levels is use of property taxes. In fact, property taxes tend to be the
largest revenue source for state and local governments. The primary property tax levied
in the U.S. is a tax on real estate, including land, private residences, and commercial
properties. Generally, the tax is an annual assessment calculated as a proportion of the
value of the property, although the formulas used by localities differ significantly.
Property taxes are commonly collected at a local level, but a share of property taxes is
allocated for state purposes. Property taxes tend to be regressive, although less regressive
than excise and sales taxes. The reason is that high-income households tend to have a
lower proportion of their assets subjected to property taxes. While renters do not directly
pay property taxes, most economists conclude that the costs of property taxes are largely
passed on to renters in the form of higher rents.
Composition of Tax Collections in the U.S.
Table 1 presents government tax receipts, by tax source, for 2008 (the most recent year
for which complete data were available). The table shows that federal taxes dominate the
nation’s tax system with nearly 65% of all receipts. The largest federal tax is the income
tax, followed closely by social insurance taxes. State and local tax systems are primarily
dependent on sales, income, and property taxation. The data in Table 1 cover the major
taxes utilized in the United States. To gain a broader perspective on taxation, see Box 1
for a summary of tax mechanisms that are major revenue sources for some countries but
are currently non-existent or insignificant in the U.S.
Table 1. 2008 U.S. Tax Receipts, by Source
Source                            Amount (Millions $)    Percent of All Taxes
Federal Taxes
  Income Taxes                            1,145,700                   30.4%
  Social Insurance Taxes                    900,200                   23.9%
  Corporate Taxes                           304,300                    8.1%
  Excise Taxes                               67,300                    1.8%
  Estate Taxes                               23,000                    0.6%
  Total, Federal Taxes                    2,440,500                   64.7%
State Taxes
  Sales Taxes                               304,400                    8.1%
  Property Taxes                            409,700                   10.9%
  Income Taxes                              304,600                    8.1%
  Corporate Taxes                            57,800                    1.5%
  Excise and Other Taxes                    253,900                    6.7%
  Total, State Taxes                      1,330,400                   35.3%
Total, All Taxes                          3,770,900                  100.0%
Source: U.S. Census Bureau (2010), except for federal estate tax data from Tax Policy Center
(2008).
BOX 1. TAX ALTERNATIVES
It is worthwhile to briefly consider tax types that are not currently important in the U.S.
because these mechanisms are used in other countries or are central in various proposals
to reform the U.S. tax system. We summarize five tax types here:
1. National sales tax. This would function similar to a state sales tax – as an
addition to the retail price of certain products. A national sales tax would clearly
be simpler and cheaper to administer than the current federal income tax. It would
also encourage savings because, under most proposals, income that is not spent on
taxable goods and services is not taxed. There are, however, two significant
disadvantages to a national sales tax. First, it would create an incentive for black
market exchanges to evade the tax. Second, it can be highly regressive – similar
to the regressivity of state sales taxes. A national sales tax could be made less
regressive, or even progressive, by providing rebates for low-income households.
2. National consumption tax. This is slightly different from a national sales tax. A
household would pay the tax at the end of the year based on the value of its annual
consumption of goods and services. Consumption can be calculated as total
income less money not spent on goods and services (i.e., invested or saved).
Again, a consumption tax would promote savings by exempting it from taxation.
A consumption tax could also be designed to be progressive by taxing different
levels of consumption at different marginal rates.
3. Value added tax. Most developed countries levy some form of value added tax
(VAT). A VAT is levied at each stage in the production process of a product,
collected from manufacturers according to the value added at each stage. Thus,
the tax is not added to the retail price but incorporated into prices, similar to the
way excise taxes become embedded into the price of products. Compared to a
national sales tax, a VAT reduces the likelihood of black markets.
4. Wealth taxes. While the U.S. tax system includes local property taxes and, at
least for a while, estate taxes, there is no tax on holdings of other assets such as
corporate stocks, bonds, and personal property. Several European countries,
including Sweden, Spain, and Switzerland, have instituted an annual wealth tax.
A wealth tax could be very progressive by setting high rates and becoming
effective only at significant wealth levels.
5. Environmental taxes. These are levied on goods and services in proportion to
their environmental impact. One example is a carbon tax, which taxes products
based on the emissions of carbon attributable to their production or consumption.
The rationale of environmental taxation is that it encourages the use and
development of goods and services with reduced environmental impacts. Like
other taxes on goods and services, environmental taxes can be regressive –
suggesting that environmental taxes need to be combined with other progressive
taxes or rebates for low-income households. Among developed countries, the U.S.
collects the smallest share of tax revenues from environmental taxes (OECD,
2010).
III. A BRIEF HISTORY OF TAXATION IN THE U.S. 10
Before the Federal Income Tax
The tax mechanisms used during the first 150 years or so of U.S. tax history bear little
resemblance to the current system of taxation. First, the U.S. Constitution restricted
“direct” taxation by the federal government – meaning taxes directly on individuals.
Instead, the federal government relied on indirect taxes including taxes on imports
(tariffs) and excise taxes. Tariffs were the major source of U.S. government receipts
from the beginning of the nation up to the early 1900’s. For example, in 1800 custom
duties comprised about 84% of government receipts (U.S. Census Bureau, 1960).
Internal federal revenue collections (which exclude tariffs on imports) as recently as the
early 20th century were primarily derived from excise taxes on alcohol. In 1900 over
60% of internal revenue collections came from alcohol excise taxes with another 20%
from tobacco excise taxes.
Another important difference is the scale of government taxation and expenditures
relative to the entire economy. Government spending is currently a major portion of the
total U.S. economy – in 2010 government expenditures and investment at all levels
comprised about 20% of total economic output. In the late 1800s government
expenditures were responsible for only about 2% of national output (earlier data on
national output are not available). The role of government has become more prominent
as a result of expansion of military activity and an increase in the provision of public
services. Consequently an overall trend of increasing taxation is evident, although we’ll
see that this trend has recently stabilized or reversed.
The Constitutional framers were wary of a government’s power to tax. Taxation of the
American Colonies by a distant and corrupt England was a driving force behind the
American Revolution. Consequently, they believed in decentralized taxation and
delegated most public revenue collection to localities, which relied primarily on property
taxes. During peacetime the federal government was able to meet its expenses through
relatively modest excise taxes and tariffs. During times of war, such as the War of 1812,
federal taxes were temporarily raised to finance the war or pay down the ensuing debts.
Once the financial crisis passed, taxes were reduced in response to public opposition to
high tax rates.
Like previous wars, the Civil War initiated an increase in both excise tax and tariff rates.
Government revenue collections increased by a factor of seven between 1863 and 1866.
Perhaps the most significant tax policy enacted during the Civil War was the institution
of the first national income tax. Concerns about the legality of the tax, considering the
Constitution’s prohibition of direct taxation, were muted during the national emergency.
The income tax rates were low by modern standards – a maximum rate of 10% along
with generous exemptions meant that only about 10% of households were subject to any
income tax. Still, the income tax generated over 20% of federal revenues in 1865. After
the war, few politicians favored the continuation of the income tax, and in 1872 it was allowed to expire.
10 The history of taxation is primarily derived from Brownlee (1996).
The impetus for the modern federal income tax rests not with a wartime emergency but
with the Populist movement of the late 1800s. The internal tax system in place at the
time, based primarily on excise taxes on alcohol and tobacco, was largely regressive.
The Populists revived interest in an income tax as a means to introduce a progressive tax
based on ability to pay. They saw it as a response to excessive monopoly profits and the
concentration of wealth and power. In other words, the tax was not envisioned as a
means to generate significant additional public revenue but as a vehicle of social justice.
A federal income tax, with a large exemption of $4,000, was instituted in 1894 but the
Supreme Court ruled it unconstitutional in 1895. Over the next couple of decades
proposals were made for a constitutional amendment to establish a federal income tax.
While these attempts were defeated, support for federal income taxation gradually
increased. Eventually, in 1913 the 16th Amendment was ratified creating the legal basis
for the federal income tax.
While the initial income tax was progressive, it was less radical than many desired. In
fact, many conservatives expressed guarded support for the measure to prevent a more
significant tax. While the income tax was targeted towards the wealthy – in the first few
years only about 2% of households paid any income tax – tax rates of only 1%-7%
prevented it from generating significant revenues.
“...virtually none of the income tax proponents within the government believed
that the income tax would become a major, yet alone the dominant, permanent
source of revenue within the consumption-based federal tax system.” (Brownlee,
1996, p. 45)
These views were to quickly change as the nation required a dramatic increase in
revenues to finance World War I.
The Growth of Direct Taxation
Rather than relying on increases in excise taxes and tariffs to finance World War I, the
administration of Woodrow Wilson transformed the income tax framework laid down
just a few years previously. Desiring both to raise additional revenue and enforce social
justice, the top marginal rate increased dramatically from 7% in 1915 to 67% in 1917
(IRS, 2002). Corporate taxes also became an important revenue source, accounting for
over one-quarter of internal revenue collections in 1917. In 1916 the estate tax was
created, not necessarily to generate large revenues but as another instrument of
progressive taxation.
Unlike previous wars, much of the tax system laid down during World War I remained in
place after the war. In the period from 1910 to 1925 tariffs fell from about half of
government receipts to less than 15%. Meanwhile the new corporate and individual
income taxes made up nearly half of government receipts in the mid 1920s. The level of
excise tax collections dropped significantly, especially during the years of Prohibition
when alcohol excise taxes virtually disappeared.
The Great Depression, of course, caused a significant decline in federal receipts. In 1932
tax rates were increased in an attempt to boost federal revenue. Franklin Roosevelt, in
the years leading up to World War II, presented progressive taxation as a key element of
the New Deal. However, the most significant measure enacted during this period was the
creation of old-age insurance.
Prior to national social insurance programs, poverty was the common state of the elderly
(Skidmore, 1999). By the 1930s, several European countries had already instituted
programs of social insurance. Germany was the first to establish old-age and survivors
pensions in 1889 (Peterson, 1999). The Great Depression finally motivated policy
makers in the U.S. to enact similar legislation. Rather than funding Social Security
programs through increases in income, or other, taxes, the funding mechanism was a
separate tax, split equally between employers and employees. All employees covered by
the system 11 contributed and received benefits regardless of their income. This design
was intended to protect the system from political attack. As everyone who pays into the
system receives benefits, Social Security is not considered “welfare” that is allocated to
only a segment of the population. Also, because Social Security is a separate tax,
contributors view their old-age payments as entitlements and oppose attempts to weaken
the program. This design has so far proved very successful – Social Security is often
called the “third rail” of American politics (i.e., touch it and you die).
World War II created yet another emergency situation requiring additional revenues.
Similar to Woodrow Wilson during World War I, President Franklin Roosevelt sought to
raise revenues primarily from higher taxes on corporations and high-income households.
Roosevelt went so far as to state that:
“In this time of grave national danger, when all excess income should go to win
the war, no American citizen ought to have a net income, after he has paid his
taxes, of more than $25,000.” (Brownlee, 1996, p. 91)
Roosevelt was unable to obtain enough Congressional support to enact his most
progressive proposals. The ensuing compromise did produce a more progressive federal
income tax but it also became levied on more households. Personal exemptions were
reduced by half between 1939 and 1942 – meaning the income tax reached well into the
middle class for the first time. The taxable income subject to the highest marginal rate
dropped from $5 million in 1941 down to $200,000 in 1942. Also, the top marginal tax
rate reached a record high of 94% in 1944. Another change during World War II was
withholding federal taxes from an employee’s paycheck rather than requiring payment of
taxes due at the end of the year. These, as well as other, changes produced a dramatic
shift in the structure of federal taxation:
“Under the new tax system, the number of individual taxpayers grew from 3.9
million in 1939 to 42.6 million in 1945, and federal income tax collections over
the period leaped from $2.2 billion to $35.1 billion. By the end of the war nearly
90 percent of the members of the labor force submitted income-tax returns, and
about 60 percent of the labor force paid income taxes. … At the same time, the
federal government came to dominate the nation’s revenue system. In 1940,
federal income tax had accounted for only 16 percent of the taxes collected by all
levels of government; by 1950 the federal income tax produced more than 51
percent of all collections. Installation of the new regime was the most dramatic
shift in the nation’s tax policies since 1916.” (Brownlee, 1996, p. 96-97)
11 While Social Security has expanded over the years to cover more employees, all workers are not currently covered by the system. For example, about one-quarter of state and local government employees are not included in the system (Peterson, 1999).
As in the period after World War I, much of the new tax structure instituted during World
War II remained in place after the war. Both major political parties expressed support for
a progressive but broad income tax, relatively flat tax rates on corporate profits, and
social insurance taxes that were basically regressive. Public support for the existing tax
system was boosted by patriotic feelings and broad-based economic growth after the war.
Changes to the tax system between the end of World War II and the 1980’s were
generally minor. The Social Security tax occasionally increased as more people were
receiving benefits. The initial tax rate of 2% (1% each for employers and employees) had
increased to 6.13% by 1979. The Medicare and Medicaid programs were established in
the 1960s. Across-the-board tax cuts in 1964 reduced marginal rates for both low- and
high-income households (the top marginal rate fell from 91% in 1963 to 70% in 1965).
Still, government continued to become a more significant portion of the entire economy
in the decades after World War II. Total government expenditure and investment
increased gradually from less than 18% of GDP in 1946 to over 22% by the mid 1970s.
From the “Reagan Revolution” to the Bush Tax Cuts
The general stasis of the federal tax system ended in the 1980s with the passage of
several important tax reforms. Ronald Reagan was elected president in 1980 on a
platform of smaller government and lower taxes. The Economic Recovery Tax Act of
1981 (ERTA) enacted the largest tax cut in American history 12 and inspired tax cutting
by many other nations in the 1980s. The supply-side rationale behind ERTA’s sharp
reduction in tax rates, particularly on high-income households and capital, was that
greater incentives would motivate increased investment and economic activity. The
ensuing economic growth and consequent tax revenue growth would, in theory, more
than offset the revenue reductions as a result of the tax cuts. Thus, the theory was that tax
cuts could actually produce an increase in federal revenues and address the growing
federal budget deficit as well. ERTA phased in a reduction in the top tax rate from 70%
to 50%, enacted several corporate tax cuts, and indexed many tax parameters to inflation
(such as personal exemptions and deductions).
12 When measured in constant dollars (adjusted for inflation).
Analysis suggests that, in reality, ERTA resulted in the largest reduction in federal
revenues of any tax bill since World War II (Tempalski, 1998). The federal budget
deficit continued to grow. The very next year, in 1982, the largest peacetime tax increase
was passed (Martin, 1991). The act repealed some of the more revenue-reducing
provisions of ERTA, such as accelerated depreciation reductions for corporations, and
closed several corporate loopholes in the tax code. Social Security reforms were enacted
in 1983 that increased Social Security tax rates and initiated taxation of some benefits.
Reagan continued to push for further tax reforms, leading to the Tax Reform Act of 1986
– considered to be the most comprehensive revision of the tax code since the 1950s
(Petska and Strudler, 1999). This act reduced top income tax rates even further – from
50% in 1986 to 28% in 1988. Among many other changes, it also lowered the top
corporate tax rate from 46% to 34%.
Clearly, the “Reagan revolution” is an important era in U.S. tax history, but many people
misinterpret it as a period where the size of the federal government was drastically
reduced and taxes cut significantly. Despite the two major tax cuts during Reagan’s
terms, federal revenue collections increased at nearly the same pace as national output
(total federal revenues increased about 76% from 1980-1988 while GDP increased 83%).
The actual changes were more evident in the distribution of federal revenues than their
total level. The share of revenues from both individual and corporate taxation fell (by 9%
and 16% respectively) while the portion from social insurance taxes increased by 38%.
As the individual and corporate taxes are progressive, while social insurance taxes are
regressive, the outcome was a decrease in the overall progressivity of the federal tax
system. Specific changes within the individual income tax code exacerbated the decline
in progressivity.
The Reagan era failed to control the growing federal deficit. The annual budget deficits
of the federal government tripled during the 1980s 13 (OMB, 2003). Partly to raise
additional revenue to try to reduce deficits, the first President Bush reneged on his
campaign promise of “no new taxes” and agreed to a compromise tax proposal in 1990
that raised the top marginal tax bracket to 31%. President Clinton reinstated additional
progressivity in 1993 by creating the 36% and 39.6% individual tax brackets. In 1993,
the corporate tax rate was increased slightly to 35%. These changes produced an increase
in the progressivity of federal taxes.
The most recent important tax legislation was the $1.35 trillion Bush tax cut passed in
2001. The major provisions of this act include lowering individual income tax rates
across-the-board, scheduling repeal of the estate tax in 2010, and increasing the amount
employees can contribute under various programs for retirement purposes. Many of the
bill’s provisions are “back-loaded,” meaning the tax reductions are phased in over time
with most of the tax reduction occurring in the future. For example, the top marginal
bracket fell from 39.6% in 2001 to 38.6% in 2002 but eventually fell to 35.0% in 2006.
The Bush tax cut reduced the overall progressiveness of the federal income tax as high-income taxpayers received a disproportionate share of the total cuts (CTJ, 2001).
13 This is based on the “on-budget” calculations. The on-budget accounting excludes the Social Security trust fund as well as other minor balances.
A somewhat smaller tax cut was passed in 2003 that, among other changes, accelerated
scheduled tax rate decreases and lowered the maximum tax rate on capital gains and
dividends. Most recently, the 2009 American Recovery and Reinvestment Act of 2009
instituted or expanded various tax credits such as a payroll tax credit of $400 per worker
and an expanded tax credit for college tuition.
IV. SUMMARY DATA OF U.S. TAX HISTORY
Until quite recently, tax collections have tended to increase over time, paralleling the
increase in the size of the federal government. We see in Figure 1 that federal tax
revenues have grown considerably during the 20th century, even after adjusting for
inflation. A large increase in federal tax collections occurred during World War II, with
relatively consistent growth after about 1960. However, notice occasional declines in
federal tax revenues, due either to recessions or to major tax code changes.
Figure 1. Tax Collections, 1913-2009 (All values in 2009 dollars) 14
14 Data on state and local taxes are incomplete and/or inconsistent prior to 1932. All data from various editions of the Statistical Abstract of the United States and U.S. Census Bureau (1960).
The growth of state and local tax collections, by comparison, has been steadier with less fluctuation.
The reason is that state and local tax revenues are derived primarily from property and
sales taxes, which vary less than income (particularly corporate income) during business
cycles.
Another way to illustrate the growth of federal taxation is to measure it relative to
national economic output. In Figure 2 we plot federal and state and local tax collections
as a share of GDP. Three facts are evident from Figure 2. First, total tax collections have
generally grown as a percentage of GDP over the 20th century. Again, the largest leap
occurred during World War II, but some additional growth is evident after the war as
well. The second fact is that federal tax revenues now substantially exceed state and
local tax revenues. While World War II solidified the federal government as the primary
tax collector in the U.S., note that this trend began prior to the war. Finally, note the
decline in federal taxes as a percentage of GDP since 2000. This is a result of both
economic recessions and declines in federal tax rates. In fact, federal taxes as a
percentage of GDP were lower in 2009 than in any year since the 1940s.
Figure 2. Tax Collections as a Percentage of GDP, 1913-2009 15
15 Data on state and local taxes are incomplete and/or inconsistent prior to 1932.
As federal revenues grew during the 20th century, the composition of taxation has
changed considerably. We see in Figure 3 that at the beginning of the century federal
taxation was dominated by excise taxes. Except for a revival of excise taxes during the
Depression Era, their importance has generally diminished over time. Corporate taxes
became the most significant source of federal revenues for the period 1918-1932. After a
period of higher corporate taxes during World War II, corporate taxes have generally
diminished in significance relative to other forms of federal taxation. Personal income
taxes became the largest source of federal revenues in 1944 and have remained so. Since
World War II, income taxes have consistently supplied between 40-50% of federal
revenues. Since about 1950, social insurance taxes have increased their share of federal
revenues from about 10% up to nearly 40%. In fact, social insurance taxes may soon
exceed personal income taxes as the largest source of federal revenues.
Figure 3. Composition of Federal Taxes, 1913-2009
The composition of state and local taxes, with its increased reliance on sales and property
taxes, differs from the composition of federal taxes. Of course, each state has a different
tax system – some states have no income and/or sales taxes, and tax rates can differ
significantly across states. In this module, we combine tax data for all states rather than
presenting a state-by-state analysis. Figure 4 presents the composition of state and local
taxes over the period 1945-2009. The two major trends that are evident are a decline in
the importance of property taxes and an increase in the importance of personal income
taxes except for a recent reversal of these trends in the last few years. While property
taxes were the primary source of state and local revenues until the 1970s, sales taxes
became the major source of revenues until 2008, when property taxes again became the
major revenue source.
Figure 4. Composition of State and Local Taxation, 1945-2009
V. THE DISTRIBUTION OF TAXES IN THE UNITED STATES
Tax Incidence Analysis
There are basically two ways to analyze how the tax burden is distributed. The easiest
way is to measure the taxes directly paid by entities, such as households or businesses,
classified according to criteria such as household income, business profit levels, etc.
These data can be obtained directly from aggregate tax return data published by the IRS
and from reports from other government agencies. This approach considers only who
actually pays the tax to the government. Thus, it would allocate corporate taxes to
corporations, excise taxes to manufacturers, sales taxes to consumers, etc.
The second approach, called tax incidence analysis, is more complex yet more
meaningful. While taxes are paid by various entities other than individuals, such as
corporations, partnerships, and public service organizations, the burden of all taxes
ultimately falls on people. The final incidence of taxation is contingent upon how a
specific tax translates into changes in prices and changes in economic behavior among
consumers and businesses:
“Tax incidence is the study of who bears the economic burden of a tax. More
generally, it is the positive analysis of the impact of taxes on the distribution of
welfare within a society. It begins with the very basic insight that the person who
has the legal obligation to make a tax payment may not be the person whose
welfare is reduced by the existence of the tax. The statutory incidence of a tax
refers to the distribution of those legal tax payments – based on the statutory
obligation to remit taxes to the government. ...
Economic incidence differs from statutory incidence because of changes in
behavior and consequent changes in equilibrium prices. Consumers buy less of a
taxed product, so firms produce less and buy fewer inputs – which changes the net
price or return to each input. Thus the job of the incidence analyst is to determine
how those other prices change, and how those price changes affect different
groups of individuals.” (Metcalf and Fullerton, 2002, p. 1)
Tax incidence analysis has produced a number of generally accepted conclusions
regarding the burden of different tax mechanisms. Remember, for example, that the
payroll tax on paper is split equally between employer and employee:
“So, who really pays the payroll tax? Is the payroll tax reflected in reduced
profits for the employer or in reduced wages for the worker? ... there is generally
universal agreement that the real burden of the tax falls almost entirely on the
worker. Basically, an employer will only hire a worker if the cost to the employer
of hiring that worker is no more than the value that worker can add. So, a worker
is paid roughly what he or she adds to the value of production, minus the payroll
tax; in effect, the whole tax is deducted from wages. ... to repeat, this is not a
controversial view; it is the view of the vast majority of analysts...” (Krugman,
2001, p. 43)
The most common assumption made regarding the allocation of corporate taxes is that
the burden of these taxes falls almost exclusively on the owners of capital investments.
Given the mobility of capital, the burden is not limited to owners of corporate capital but
extends to owners of all capital. 16 This result is primarily a theoretical finding – in reality
some portion of the corporate tax burden likely falls on workers (through lower wages)
and consumers (through higher prices).
Excise taxes, although directly paid by manufacturers, are generally attributed entirely to
consumers according to their consumption patterns. 17 This result is based on an
assumption of perfect competition in the affected industries. Real-world markets,
however, are not perfectly competitive. The actual incidence of excise taxes will depend
on the degree of competition in an industry. For example, imperfectly competitive
industries with upward-sloping supply curves imply that prices increase by less than the
tax and that a portion of excise taxes is borne by businesses. 18
16 See summary in Metcalf and Fullerton (2002).
17 See CBO (2008).
18 See Fullerton and Metcalf (2002) for a summary of incidence assumptions and analyses for different types of taxes.
The burden of sales taxes is generally assumed to fall directly on consumers who buy the
taxed goods and services. Again, this is a simplifying assumption – in reality some
portion of sales taxes filters to corporate owners, other capital owners, and workers.
Personal income taxes paid by households are directly attributed to those households
paying the tax. Estate tax burdens fall on the heirs paying the tax. Finally, property tax
burdens are generally assumed to fall on property owners although the burden can be
passed on renters (some analysts attribute property taxes more broadly to owners of
capital).
So, for several types of tax mechanisms (personal income, sales, excise, and estate taxes),
data on direct tax payments is analogous to tax incidence. However, for other taxes
(payroll, corporate, and to a lesser extent property taxes) the direct data on tax payments
will differ from the ultimate burden of the tax.
Using Effective Tax Rate Data to Determine Tax Progressivity
As mentioned before, a tax is progressive if the percentage of income a person pays for
the tax increases as income increases. Thus, we can determine whether a tax is
progressive or regressive by looking at a table showing the effective tax rates for a
particular tax for people in different income categories. If effective tax rates increase
(decrease) with increasing income, then the tax is progressive (regressive). Table 2
shows the percentage of income people in each adjusted gross income (AGI) category
paid in federal income taxes in 2008, the most recent data available. We see that
effective tax rates for the federal income tax tend to increase with increasing income
(although not always). For taxpayers making less than $100,000 AGI per year, the
effective federal income tax rate averages less than 10% of income. For those making
more than $200,000 per year, the federal income tax averages more than 20% of income.
The federal income tax is clearly progressive because those with higher incomes
generally pay a larger share of their income for the tax. For a regressive tax, effective tax
rates tend to decrease as income increases. If effective tax rates are constant at different
income levels, then a tax is proportional.
Table 2. Distribution of Federal Income Taxes, 2008
AGI Category             Percent of Returns    Average AGI    Average Income Taxes    Effective Income Tax Rate
$1-$10,000                      16.7               $5,099              $177                     3.5%
$10,000-$20,000                 16.0              $14,927              $513                     3.4%
$20,000-$30,000                 13.0              $24,798            $1,421                     5.7%
$30,000-$50,000                 18.0              $39,126            $2,808                     7.2%
$50,000-$75,000                 13.5              $61,470            $5,246                     8.5%
$75,000-$100,000                 8.2              $86,421            $8,037                     9.3%
$100,000-$200,000                9.7             $133,208           $16,903                    12.7%
$200,000-$500,000                2.4             $285,735           $55,984                    19.6%
$500,000-$1,000,000              0.4             $679,576          $163,513                    24.1%
More than $1,000,000             0.2           $3,349,101          $780,550                    23.3%
Looking at effective tax rates by income categories can normally determine whether a tax
is progressive or regressive. However, there may be some cases where effective tax rates
do not follow a consistent pattern across income levels. For example, suppose that
effective tax rates first increase but then decrease as we move up the income spectrum.
Another limitation with data on effective tax rates is that this approach does not tell us the
degree of progressivity or regressivity. We might not be able to determine whether one
tax is more progressive than another or whether a particular tax becomes more or less
progressive over time.
Researchers have come up with several tax indices that measure the progressivity of a tax
as a single number. These indices allow direct comparisons across different tax types and
across time. The most common tax progressivity index is discussed in Box 2.
Effective Tax Rates in the United States
Data on the distribution of taxes in the U.S. are available from several sources. The
government sources that publish data on tax distribution include the Internal Revenue
Service (IRS), the Joint Committee on Taxation (JCT), the Congressional Budget Office
(CBO), and the Office of Tax Analysis within the U.S. Treasury. The IRS data are the
most detailed but focus on federal income and estate taxes. The IRS publishes data on
corporate taxes but does not conduct tax incidence analysis. The JCT occasionally
conducts tax incidence analyses but only on the federal income tax, payroll taxes, and
federal excise taxes. The CBO adds the incidence of federal corporate taxes to their
analyses but still omits the federal estate tax and all state and local taxes.
The only source for tax incidence data for all taxes in the U.S. is Citizens for Tax Justice
(CTJ), a non-profit organization. CTJ uses data from government sources but has
developed its own models of tax incidence. Comparison of tax progressivity data from
CTJ with data from the federal sources listed above indicates that their results are
generally similar to the government’s results and not biased in either direction (Roach,
2003).
BOX 2. MEASURING TAX PROGRESSIVITY – THE SUITS INDEX
The Suits Index, developed by Daniel Suits in the 1970s (Suits, 1977), calculates a single
number that measures tax progressivity. The approach basically compares the cumulative
share of income received by taxpayers, ordered from lowest to highest, to their cumulative
share of taxes paid. For a progressive (regressive) tax, the share of taxes paid will tend to
be less (more) than the share of income as we move up the income spectrum. Other tax
progressivity indices have been developed but the Suits Index remains the most widely
used approach (Anderson, et al., 2003).
While the calculation details are not presented here, the Suits Index is a number ranging
between –1 and +1. A negative Suits Index means that the tax is regressive while a
positive index indicates a progressive tax (with a value of zero for a proportional tax).
The Suits Index can be used to compare the degree of progressivity of different tax types
as well as determine whether a tax becomes more or less progressive over time.
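The calculation details are omitted here, but readers who want to experiment can use the Python sketch below. It implements one standard formulation of the index (an assumption on our part, not a reproduction of Suits' original procedure): order taxpayers from lowest to highest income, accumulate income and tax shares, and compare the area under the resulting curve with the area under the diagonal that a proportional tax would trace.

```python
# A minimal, illustrative Suits Index calculation (one standard formulation;
# the exact procedure is not specified in this box).
def suits_index(incomes, taxes):
    """Returns a value between -1 (regressive) and +1 (progressive); 0 is proportional."""
    # Order taxpayers (or income groups) from lowest to highest income.
    pairs = sorted(zip(incomes, taxes))
    total_income = sum(incomes)
    total_tax = sum(taxes)

    # Cumulative shares of income (x) and taxes (y), starting from (0, 0).
    x, y = [0.0], [0.0]
    cum_inc = cum_tax = 0.0
    for inc, tax in pairs:
        cum_inc += inc
        cum_tax += tax
        x.append(cum_inc / total_income)
        y.append(cum_tax / total_tax)

    # Area under the tax-concentration curve, by the trapezoid rule.
    area = sum((x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2 for i in range(1, len(x)))
    # A proportional tax traces the diagonal, whose area is 0.5.
    return 1 - area / 0.5

# Toy data: higher-income groups pay a larger share of income in tax,
# so the index comes out positive (progressive), about +0.11 here.
print(round(suits_index([10_000, 50_000, 200_000], [500, 5_000, 40_000]), 2))
```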
The Suits Index has been used to estimate the progressivity of different tax types in the
U.S. for 2007 (Roach, 2010). Table 2.1 shows that the U.S. tax system contains a mixture
of progressive and regressive taxes. The federal estate tax is the most progressive tax
while the federal corporate and income taxes are also progressive. On the other hand,
federal excise taxes are the most regressive. Federal social insurance taxes and overall
state and local taxes are also regressive. When all federal taxes are considered, the Suits
Index of +0.18 indicates that federal taxation is progressive. The entire U.S. tax system is
also progressive, but the recent Suits Indices of +0.05 and +0.06 are closer to a value of
zero (a proportional tax) than just the federal tax system.
Table 2.1. Suits Index Estimates of the U.S. Tax System, 2007, by Tax Type 1
Tax Type                          Suits Index
Federal Income                       +0.42
Federal Social Insurance             -0.20
Federal Excise                       -0.31
Federal Corporate                    +0.51
Federal Estate and Gift              +0.63
State and Local                      -0.12
Total Federal                        +0.18
All U.S. Taxes (2001 data)           +0.09
All U.S. Taxes (2004 data)           +0.05
All U.S. Taxes (2009 data)           +0.06
__________________
1 – The Suits Index for the federal estate and gift tax is based upon 2008 data.
Table 3 presents the tax distribution data from CTJ for 2009. We see that while the
federal tax system is progressive, the state and local tax system is, on average, regressive.
Overall, the tax system in the U.S. is progressive, although the rate of progressivity levels
off at upper income levels and actually reverses at the highest income level in Table 3.
Table 3. Effective Tax Rates, 2009 19
                                            Effective Tax Rates
Income Group     Average Income    Federal Taxes    State & Local Taxes    All Taxes
Lowest 20%           $12,400            3.6%               12.4%             16.9%
Second 20%           $25,000            8.7%               11.8%             20.5%
Third 20%            $40,000           13.9%               11.3%             25.3%
Fourth 20%           $66,000           17.2%               11.3%             28.5%
Next 10%            $100,000           19.0%               11.1%             30.2%
Next 5%             $141,000           20.4%               10.8%             31.2%
Next 4%             $245,000           21.3%               10.2%             31.6%
Top 1%            $1,328,000           22.3%                8.4%             30.8%
ALL                  $68,900           18.0%               10.6%             28.6%
19 Data from CTJ, 2010.
Tax Progressivity over Time
Consistent data are generally not available to determine how the entire tax burden in the
U.S. has shifted over time. Most analyses are limited to one, or a few, tax types. Further,
interest groups can interpret the available data to support their particular agendas. For an
illustration about how the same tax data can be used to support different claims, see Box
3.
Analysis of tax progressivity over time indicates that the federal tax system is about as
progressive now as it was in the late 1970s (Roach, 2010). The progressivity of the
federal tax system declined during the early 1980s, rose in 1987 (the year following the
passage of the Tax Reform Act of 1986), either remained stable or rose slightly up to the
mid-2000s, and has decreased slightly since then.
Complete data on the distribution of state and local taxes are available from Citizens for
Tax Justice for 1995, 2002, 2007, and 2009, with Suits Indices of -0.11, -0.07, -0.12, and
-0.07 respectively. Thus the available data suggest no obvious overall trend in the
regressivity of state and local taxes. The unavailability of consistent data on the
distribution of state and local taxes makes trends in the overall U.S.
tax system difficult to determine. As Table 2.1 indicated, total taxes declined in
progressivity from 2001 to 2004, and then stayed about the same from 2004 to 2009.
BOX 3. INTERPRETING TAX PROGRESSIVITY DATA
Has the federal income tax burden on the very wealthy been increasing or decreasing in
recent decades? Data published by the CBO reveals that the percent of federal income
taxes paid by the highest-income taxpayers has increased steadily over the past few
decades. In 1979, the top 1% of taxpayers paid about 18.3% of all federal income taxes.
In 2007, the top 1% of taxpayers paid over 39.5%. Clearly, these data suggest that the
federal income tax has become much more progressive since 1979.
However, these statistics represent an incomplete analysis. Specifically, they fail to
consider how the proportion of income accruing to the top 1% has changed over the same
time period. The increasing tax share paid by high-income taxpayers may be a function of
an increase in income, rather than a change in the tax system. In other words, if the share
of all income received by the top 1% increased, we would naturally expect that their share
of taxes paid would also increase without any changes in the underlying progressivity of
the tax system. Income statistics indicate that the share of income going to the top 1% of
taxpayers has also increased significantly since 1979. The top 1% of taxpayers received
less than 9.2% of income in 1979 but more than 19.4% in 2007. Based on this fact alone,
we would expect the top 1% to be paying a greater share of all federal income taxes.
So, has the federal income tax burden on the top 1% increased or decreased since 1979?
We can combine the tax and income data for a more complete analysis. The share of
income going to the top 1% increased by a factor of 2.1 between 1979 and 2007.
Meanwhile, their share of taxes paid has increased by a factor of 2.2. This suggests that
the share of taxes paid by the top 1% has risen by about as much as their share of
income – indicating a relatively stable degree of tax progressivity in the federal income
tax – a dramatically different conclusion had we only considered data on tax shares!
References
Brownlee, W. Elliot. 1996. Federal Taxation in America. University of Cambridge Press:
Cambridge.
Chaptman, Dennis. 2003. “States' Budget Troubles Worsening, Report Finds,” Milwaukee
Journal Sentinel, Feb. 5, 2003.
Citizens for Tax Justice, Institute on Taxation & Economic Policy. 2003a. “Who Pays? A
Distributional Analysis of the Tax Systems in All 50 States, 2nd Edition,” January 2003,
http://www.itepnet.org/wp2000/text.pdf.
Citizens for Tax Justice. 2010. “All Americans Pay Taxes,” April 15, 2010.
http://www.ctj.org/pdf/taxday2010.pdf.
Citizens for Tax Justice. 2003b. “Final Tax Plan Tilts Even More Towards Richest,”
June 5, 2003 press release, http://www.ctj.org/pdf/sen0522.pdf.
Citizens for Tax Justice. 2002. “White House Reveals Nation’s Biggest Problems: The
Very Rich Don’t Have Enough Money & Workers Don’t Pay Enough in Taxes,”
December 16, 2002 press release, http://www.ctj.org/pdf/flat1202.pdf.
Citizens for Tax Justice. 2001. “Final Version of Bush Tax Plan Keeps High-End Tax
Cuts, Adds to Long-Term Cost,” May 26, 2001 press release,
http://www.ctj.org/html/gwbfinal.htm.
Congressional Budget Office, “Effective Federal Tax Rates, 2005,” December 2008.
Fullerton, Don, and Gilbert E. Metcalf, 2002. “Tax Incidence,” National Bureau of
Economic Research Working Paper 8829.
IRS (Internal Revenue Service). Various Years. Statistics of Income, Individual Income
Tax Returns. Washington, D.C.
IRS (Internal Revenue Service). 2002. “Personal Exemptions and Individual Income Tax
Rates, 1913-2002.” Statistics of Income Bulletin Data Release, June 2002.
Johnson, Charles M. 2002. “Finding their Balance?” Missoulian, December 8, 2002.
Joint Committee on Taxation. 2001. “Updated Distribution of Certain Federal Tax Liabilities by Income Class for Calendar Year 2001,” JCX-65-01.
Krugman, Paul. 2002. “For Richer,” The New York Times, October 20, 2002, section 6, page 62.
Krugman, Paul. 2001. Fuzzy Math: The Essential Guide to the Bush Tax Cut Plan, W.W.
Norton & Company: New York.
Martin, Cathie J. 1991. Shifting the Burden: The Struggle over Growth and Corporate
Taxation. The University of Chicago Press: Chicago.
Metcalf, Gilbert E. and Don Fullerton. 2002. “The Distribution of Tax Burdens: An
Introduction,” National Bureau of Economic Research Working Paper 8978.
OECD (Organisation for Economic Co-operation and Development). 2010. “More
Information on Environmentally Related Taxes, Fees and Charges,”
http://www2.oecd.org/ecoinst/queries/index.htm.
OMB (Office of Management and Budget). 2003. “Historical Tables, Budget of the
United States Government, Fiscal Year 2004.” Washington, D.C.
Peterson, Wallace C. 1999. The Social Security Primer: What Every Citizen Should
Know. M.E. Sharpe: Armonk, NY.
Petska, Tom, and Mike Strudler. 1999. “The Distribution of Individual Income and
Taxes: A New Look at an Old Issue.” Paper presented at the 1999 American Economics
Association conference, January 3-5, 1999, New York,
http://www.irs.gov/taxstats/article/0,,id=112309,00.html.
Roach, Brian. 2010. “Progressive and Regressive Taxation in the United States: Who’s
Really Paying (and Not Paying) their Fair Share?” Global Development And
Environment working paper 10-07, December 2010.
Roach, Brian. 2003. “Progressive and Regressive Taxation in the United States: Who’s
Really Paying (and Not Paying) their Fair Share?” Global Development And
Environment working paper 03-10, October 2003.
Skidmore, Max J. 1999. Social Security and Its Enemies. Westview Press: Boulder, CO.
Tax Policy Center. 2010. “Wealth Transfer Taxes: Who Pays the Estate Tax?” The Tax
Policy Briefing Book, http://www.taxpolicycenter.org/briefing-book/keyelements/estate/who.cfm.
Tax Policy Center. 2008. “Estate Tax Returns and Liability Under Current Law and
Various Reform Proposals, 2008-2018,” Table T08-0264, October 20, 2008.
Tempalski, Jerry. 1998. “Revenue Effects of Major Tax Bills.” Office of Tax Analysis
Working Paper 81, December 1998.
U.S. Census Bureau. 2003. “Historical Income Tables - Income Equality, Table IE-1,”
http://www.census.gov/hhes/income/histinc/ie1.html.
U.S. Census Bureau. 2010. The 2010 Statistical Abstract of the United States.
Washington, D.C.
U.S. Census Bureau. Various Years. Statistical Abstract of the United States.
Washington, D.C.
U.S. Census Bureau. 1960. Historical Statistics of the United States, Colonial Times to
1957. Washington, D.C.
MODULE SUMMARY
• The overall tax system in the United States is progressive, meaning that effective
tax rates tend to increase as income increases. Progressive taxation is based on
the view that higher-income taxpayers can pay higher tax rates without having to
forego life’s basic necessities. Progressive taxation can also redress economic
inequalities and collect a given level of revenue while maintaining the maximum
level of economic growth.
• The federal income tax is the most complicated and debated tax in the U.S. tax
system. The federal income tax is progressive, with increasing marginal tax rates.
Federal income taxes are calculated based on taxable income, which is less than
total income because various exemptions and deductions are allowed.
• The federal tax system in the U.S. also includes social insurance, corporate,
excise, estate, and gifts taxes. Social insurance and excise taxes are regressive
while corporate, estate, and gift taxes are progressive. The U.S. tax system also
includes state and local taxes, primarily sales, income, and property taxes.
• About 65% of the taxes levied in the U.S. are collected at the federal level. The
largest federal tax is the income tax, closely followed by social insurance taxes.
The most significant non-federal tax is property taxes, followed by sales and
income taxes.
• Up until the early 1900s, the U.S. tax system primarily relied on excise taxes and
tariffs for public revenues. The 16th Amendment, ratified in 1913, created the
legal basis for federal income taxation, which up to that point had been prohibited
under the Constitution.
• Both World Wars led to significant changes in the structure and overall magnitude
of taxes in the U.S. By the end of World War II, U.S. taxes were broad-based but
progressive and dominated by federal-level taxation.
• Tax cuts passed during the Reagan Administration in the 1980s were based on the
theory that lower tax rates would spur economic growth, leading to a net increase
in tax revenues. This theory was not supported by the evidence, eventually
leading to tax increases in the early 1990s. The Bush tax cuts passed in 2001 and
2003 have made federal taxes less progressive.
• Tax revenues in the U.S. increased dramatically during the 20th century, even after
adjusting for inflation. When measured as a percentage of GDP, tax revenues
grew significantly during World War II, grew at a slower pace afterwards, and
leveled off recently at around 30% of GDP.
• Measuring the distribution of taxes requires tax incidence analysis, which
determines the ultimate burden of a tax on taxpayers. Tax incidence analysis
generally concludes that social insurance taxes fall on workers, corporate taxes
fall on the owners of capital, excise taxes fall on consumers, and property taxes
are passed on to renters.
• Effective tax rates measured by income level can be used to determine whether a
particular tax is progressive or regressive. While the U.S. tax system contains
both progressive and regressive taxes, the overall system is progressive. Recent
data suggest that federal taxes are becoming less progressive while state and local
taxes are becoming more regressive.
DISCUSSION QUESTIONS
1. Comment on the following statement: “The fairest type of tax system is one in
which everyone pays the same rate of taxation, regardless of income.” Do you
agree or disagree with the statement? Why?
2. Suppose you could set the overall effective tax rates across different levels of
income. What do you think should be the appropriate effective tax rates for a
household of four (two adults and two children) with an income of $25,000? An
income of $60,000? An income of $100,000? An income of $500,000? Is the
system you devise more or less progressive than the tax system currently in place
in the U.S.? How does your system compare with others in your class?
3. The U.S. tax system is currently comprised of many different types of taxes
(income, social insurance, corporate, sales, property, etc.). What reasons could be
given to support the use of many different tax types in a nation? Do you think
that a nation’s tax system should be comprised of many different types of taxes or
just one type of tax? If you had to choose just one type of tax to levy in a nation,
what type of tax would you choose? Why?
4. Comment on the following statement: “As long as a tax cut reduces taxes for
everyone, then everyone will be better off as a result of the tax cut.” Do you
agree with this statement? Why or why not?
5. Using the Internet or other sources, look up information about the basic structure of
the tax system in place in a country other than the United States. What
differences are evident in that country’s tax system? Do you think that country
has a more or less progressive tax system? Which nation’s tax system is
preferable to you? Why?
6. Locate a recent news story about a proposal for a change to the tax system, either
at the federal or state level. Summarize the proposed change. Would the change
increase or decrease tax progressivity? Who would benefit most from the
proposal? Who would be hurt the most from the proposal? Do you support the
proposal? Why or why not?
ADDITIONAL RESOURCES
• All the federal government agencies that work on tax issues maintain web sites that
provide tax data and reports. The IRS’s Statistics of Income Bulletins, published four
times a year, can be found dating back to 1998 at
http://www.irs.gov/taxstats/article/0,,id=117514,00.html. The SOI Bulletins provide
data analysis of primarily individual and corporate taxes. Publications produced by
the Joint Committee on Taxation can be found at
http://www.jct.gov/publications.html. Publications by the Congressional Budget
Office related to tax issues, going as far back as the 1970s, are available at
http://www.cbo.gov/publications/bysubject.cfm?cat=33. Finally, tax analysis by the U.S.
Treasury Department, only dating back to 2001, can be found at
http://www.treasury.gov/resource-center/tax-policy/Pages/default.aspx.
• A large amount of tax-related data is published annually in the Statistical Abstract of
the United States. Each year’s edition includes a chapter on state and local
government finances and another chapter on federal government finances. The
Census Bureau has recently added select historical editions of the Statistical Abstract
dating as far back as 1878, although online availability is more complete for the first
half of the 20th century than the latter half of the century (see
http://www.census.gov/compendia/statab).
• Citizens for Tax Justice publishes many other tax analyses besides those referenced in
this module. Their web site is www.ctj.org. Two other non-profit organizations that
conduct tax analysis are the Tax Policy Center, a joint venture of the Urban Institute
and Brookings Institution, and the Center for Budget and Policy Priorities. The Tax
Policy Center (www.taxpolicycenter.org) publishes several reports each month on a
wide range of tax issues, including distributional impacts and public budget
implications. The CBPP (www.cbpp.org) research focuses on “fiscal policy and
public programs that affect low- and moderate-income families and individuals.”
Similar to the Tax Policy Center, the CBPP conducts distributional analyses of
current tax proposals.
• For an opposing view on tax issues, the Tax Foundation (www.taxfoundation.org)
publishes tax analyses that generally support lower overall taxes and conclude that the
distributional impacts of recent tax cuts are fair. A similar organization, with a more
activist agenda, is Americans for Tax Reform (www.atr.org).
KEY TERMS AND CONCEPTS
Ability-to-pay principle: the idea that higher-income households and individuals should
pay higher tax rates than lower-income taxpayers because they are more able to bear the
tax without foregoing life’s basic necessities.
Adjusted gross income (AGI): the total income of a household or individual minus
certain out-of-pocket expenses such as retirement account contributions, student loan
interest, tuition, and other allowable subtractions. AGI is calculated on one’s federal tax
return.
Effective tax rate: one’s total taxes paid divided by some measure of income, such as
total income, adjusted gross income, or taxable income.
Environmental taxes: taxes levied on a good or service based on the environmental
impact of its production or consumption.
Estate taxes: taxes on the transfer of large estates to beneficiaries.
Excise taxes: taxes on the production, sale, or use of a particular commodity.
Exemptions: an amount excluded from taxation based on the number of tax filers and
dependents.
Gift taxes: taxes levied on large gifts; gift taxes are designed to prevent taxpayers from
avoiding estate taxes by giving away their assets while alive.
Itemized deductions: certain expenses excluded from federal taxation, including
mortgage interest, state taxes, gifts to charity, real estate taxes, and major medical
expenses. A taxpayer is allowed to deduct either the standard or itemized deduction,
whichever is larger.
Marginal propensity to consume: the proportion of a marginal income increase that is
spent on consumption goods and services, as opposed to invested or saved.
Marginal tax rates: a tax system where a single taxpayer can pay different tax rates on
successive portions of income.
National consumption tax: a federal-level tax paid on the dollar amount a household or
individual spends each year on goods and services, calculated using either a single tax
rate or marginal tax rates.
National sales tax: a federal-level tax paid on the purchase of certain goods and services,
calculated as a percentage of the selling price.
Perfect competition: an idealized market structure characterized by many informed
small firms with no market power selling undifferentiated products and with complete
freedom to enter or exit the market.
Progressive tax: a tax in which the percentage of income one pays for the tax increases
as one’s income increases.
Proportional tax: a tax in which the percentage of income one pays for the tax is
constant regardless of income level.
Regressive tax: a tax in which the percentage of income one pays for the tax decreases as
one’s income increases.
Social insurance taxes: taxes paid to support social insurance programs such as Social
Security, Medicare, and Medicaid.
Standard deduction: a fixed amount of income excluded from federal taxation based on
filing status (single, married, etc.). A taxpayer is allowed to deduct either the standard or
itemized deduction, whichever is larger.
Suits index: an index developed by Daniel Suits in the 1970s to measure the overall
progressivity or regressivity of a tax.
Tariffs: taxes levied on imported goods and services.
Tax incidence analysis: estimating the ultimate financial burden of various taxes on
different categories of households by tracing a tax’s impact on market prices and the
economic behavior of consumers and businesses.
Taxable income: the amount of income used as the basis for determining one’s income
taxes. For federal income taxes, taxable income is equal to adjusted gross income (AGI)
minus allowable deductions and exemptions.
Total income: the sum of income a household or individual receives from all sources.
Value-added tax: a tax levied at each stage in the production process of a good or
service.
Wealth taxes: taxes levied on the value of one’s assets such as real estate, investments,
cash, and other personal property.
Taxes in the United States:
History, Fairness, and
Current Political Issues
by Brian Roach
A GDAE Teaching Module
on Social and Environmental
Issues in Economics
Global Development And Environment Institute
Tufts University
Medford, MA 02155
http://ase.tufts.edu/gdae
Copyright © 2010 Global Development And Environment Institute, Tufts University.
Copyright release is hereby granted for instructors to copy this module for instructional purposes.
Students may also download the module directly from http://ase.tufts.edu/gdae.
Comments and feedback from course use are welcomed:
Global Development And Environment Institute
Tufts University
Medford, MA 02155
http://ase.tufts.edu/gdae
E-mail: [email protected]
I. INTRODUCTION
“The hardest thing in the world to understand is income tax!” – Albert Einstein
Taxes are complicated. The U.S. federal tax code contains over three million words –
about 6,000 pages. A casual browsing of the tax code’s table of contents offers a glimpse
into the vast complexity of federal taxation. Entire sections of the tax code apply
specifically to the taxation of vaccines (Sec. 4131-4132), shipowners' mutual protection
and indemnity associations (Sec. 526), specially sweetened natural wines (Sec. 5385),
and life insurance companies (Sec. 801-818). Annual changes to the tax code imply that
taxes will continue to become more complex even as politicians tout tax simplification.
Taxes levied by other jurisdictions, such as states and cities, add further complexity to
taxation in the U.S. Americans spend billions of hours each year working on their taxes,
not to mention the costs of accountants and tax preparers.
Fortunately, one needn’t comprehend the imposing complexity of the tax code to
understand the crucial role of taxes in American society. Taxation is an important, but
commonly neglected, topic for students of economics, political science, and other
disciplines. Tax policy has important economic consequences, both for the national
economy and for particular groups within the economy. Tax policies are often designed
with the intention of stimulating economic growth – although economists differ
drastically about which policies are most effective at fostering growth. Taxes can create
incentives promoting desirable behavior and disincentives for unwanted behavior.
Taxation provides a means to redistribute economic resources towards those with low
incomes or special needs. Taxes provide the revenue needed for critical public services
such as social security, health care, national defense, and education.
Taxation is as much of a political issue as an economic issue. Political leaders have used
tax policy to promote their agendas by initiating various tax reforms: decreasing (or
increasing) tax rates, changing the definition of taxable income, creating new taxes on
specific products, etc. Of course, no one particularly wants to pay taxes. Specific
groups, such as small-business owners, farmers, or retired individuals, exert significant
political effort to reduce their share of the tax burden. The voluminous tax code is
packed with rules that benefit a certain group of taxpayers while inevitably shifting more
of the burden to others. Tax policy clearly reflects the expression of power in the U.S. –
those without power or favor are left paying more in taxes while others reap the benefits
of lower taxes because of their political influence. Broad attempts to reform the tax
system have produced dramatic and sudden shifts in tax policy, generally motivated by
political factors rather than sound economic theory. For example, the top marginal
federal tax bracket on individual income in the U.S. dropped precipitously from 70% to
28% during the 1980s. Tax policy has clearly been used to promote political, as well as
economic, agendas.
This module is intended to provide a basic understanding of the economic, political, and
social context of the entire U.S. tax system. When most people think about taxes, they
tend to think only of the federal income tax. However, looking solely at the federal
income tax would miss several important issues. Perhaps most importantly, the federal
income tax is not the largest tax bill to most Americans. We’ll see that the largest tax for
most Americans is federal social insurance taxation. Also, the federal income tax is one
of the most progressive taxes in the U.S. system. When all taxes are considered, the U.S.
tax system is much less progressive. You may be surprised to find out how many taxes in
the U.S. are actually regressive – hitting low-income households at a disproportionately
high rate.
This module is divided into three major sections. First, some basic terms will be defined
and discussed, including tax progressivity and the differences between several types of
taxes. Second, a brief overview of tax history in the United States will be presented.
Third, data on tax trends will be used to illustrate the changing nature of taxation with a
focus on the overall progressivity of the entire tax system.
II. THE STRUCTURE OF TAXATION IN THE UNITED STATES
Tax Progressivity
The overall system of taxation in the United States is progressive. By a progressive tax
system, we mean that the percentage of income an individual (or household) pays in taxes
tends to increase with increasing income. Not only do those with higher incomes pay
more in total taxes, they pay a higher rate of taxes. This is the essence of a progressive
tax system. For example, a person making $100,000 in a year might pay 25% of their
income in taxes ($25,000 in taxes), while someone with an income of $30,000 might only
pay a 10% tax rate ($3,000 in taxes).
A tax system may also be regressive or proportional. A regressive tax system is one
where the proportion of income paid in taxes tends to decrease as one’s income increases.
A proportional tax system simply means that everyone pays the same tax rate regardless
of income. A particular tax system may display elements of more than one approach.
Consider a hypothetical tax system where one pays a proportional, or flat 1 , rate on
income below a certain dollar amount and then progressively increasing rates above that
dollar amount. Also, within an overall tax system, some particular taxes might be
progressive while other taxes are regressive. We’ll see later on that this is the case in the
United States.
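To make the distinction concrete, the short Python sketch below (an illustration added here, using the
hypothetical incomes and tax bills from the example above) classifies a tax as progressive, regressive, or
proportional by comparing effective tax rates at two income levels.

def effective_rate(tax_paid, income):
    # Effective tax rate: taxes paid as a share of income.
    return tax_paid / income

def classify(low_income, low_tax, high_income, high_tax):
    # Compare effective rates at a lower and a higher income level.
    low_rate = effective_rate(low_tax, low_income)
    high_rate = effective_rate(high_tax, high_income)
    if high_rate > low_rate:
        return "progressive"
    if high_rate < low_rate:
        return "regressive"
    return "proportional"

# The example above: a $100,000 earner paying $25,000 (25%) and a
# $30,000 earner paying $3,000 (10%) implies a progressive tax.
print(classify(30_000, 3_000, 100_000, 25_000))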
The Reasons for Progressive Taxation
The overall tax system of the United States, and in most other countries, is progressive
for a number of reasons. A progressive tax embodies the concept that those with high
incomes should pay more of their income in taxes because of their greater ability to pay
without critical sacrifices.
Footnote 1: This is not exactly the same concept embodied in current proposals for a “flat tax” in the U.S.
These proposals would set just one tax rate but would exclude a given amount of income from taxation.
Thus, the flat tax proposals would retain a small degree of progressivity.
By paying a tax, any household must forego an equivalent
amount of spending on goods, services, or investments. For a high-income household,
these foregone opportunities might include a second home, an expensive vehicle, or a
purchase of corporate stock. A low-income household, by comparison, might have to
forego basic medical care, post-secondary education, or vehicle safety repairs. As
income increases, the opportunity costs of paying taxes tend to be associated more with
luxuries rather than basic necessities. The ability-to-pay principle recognizes that a flat
(or regressive) tax rate would impose a larger burden, in terms of foregone necessities, on
low-income households as compared to high-income households.
A progressive tax system is also a mechanism to address economic inequalities in a
society. To evaluate a tax system’s impact on inequality, one must consider both the
distribution of taxes paid and the distribution of the benefits derived from tax revenue. If
the benefits of programs funded by taxation primarily benefit low-income households
while high-income households pay the majority of taxes, then the tax system effectively
operates as a transfer mechanism. Increasing the progressivity of the tax system or
altering the distribution of benefits allows greater redistribution of economic resources.
We’ll mainly focus on tax payments in this module but you should also be aware that the
benefits of public expenditures are not evenly distributed throughout society. 2
There is also an economic argument for a progressive tax system – it may yield a given
level of public revenue with the least economic impact. To see why, consider how
households with different levels of income would respond to a $100 tax cut. A low-income
household would tend to quickly spend the entire amount on needed goods and
services – injecting $100 of increased demand into the economy. By comparison, a high-income
household might only spend a fraction on goods and services, choosing to save or
invest a portion of the money. The money that a high-income household saves or invests
does not add to the overall level of effective demand in an economy. 3 In economic
terms, we say that the marginal propensity to consume tends to decrease as income
increases. So, by collecting proportionally more taxes from high-income households we
tend to maintain a higher level of effective demand and more economic activity.
Of course, one can posit that a tax system can become too progressive. Extremely high
tax rates at high-income levels might create a significant disincentive that reduces the
productive capacity of society. Very high taxes might limit the risks taken by
entrepreneurs, stifling innovations and technological advances. The desire to “soak the
rich” through an extremely progressive tax system might be viewed as unfair, and not just
by the rich. In fact, this was a concern of the Constitutional framers – that a democratic
majority would eventually impose unduly burdensome taxes on the wealthy minority.
We’ll see that their concerns have proved groundless. Many critics of the current tax
system point to the contrary position – that the powerful minority have used their might
to shift the tax burden away from themselves onto an immobilized and misinformed
majority.
Footnote 2: The distribution of the benefits derived from public expenditures is, of course, more difficult to
determine than the distribution of tax payments. The distribution of public assistance programs can be easily
measured. However, the distribution of the benefits of scientific research support, business subsidies,
public works, national defense, and other expenditures is a difficult research task.
Footnote 3: Money saved or invested may, however, provide the financial capital necessary to increase the
productive capacity of the economy. “Supply-side” economists stress the importance of investment by the
wealthy as the key to economic growth.
Even if one could devise a tax system that is economically optimal (i.e., producing the
highest overall level of economic growth), the topic of taxation encompasses ideals about
equity and fairness. A society may be willing to sacrifice some degree of economic
growth in exchange for a more equitable distribution of economic resources. This is not
to say that economic growth must always be sacrificed with redistribution. In fact,
analysis of the U.S. historical data finds that high levels of economic growth tend to be
associated with periods of relatively equitable distribution of economic resources
(Krugman, 2002).
We now turn to differentiating between the different types of taxes levied in the U.S.
We’ll first discuss several forms of federal taxation, roughly in order of the revenue they
generate, and then consider taxation at the state and local levels. A final section will
consider taxes that are generally not used in the U.S. but are important in other nations.
Federal Income Taxes
The federal income tax is the most visible, complicated, and debated tax in the U.S. The
federal income tax was established with the ratification of the 16th Amendment to the
U.S. Constitution in 1913. It is levied on wages and salaries as well as income from
many other sources including interest, dividends, capital gains, self-employment income,
alimony, and prizes. To understand the basic workings of federal income taxes, you need
to comprehend only two major issues. First, all income is not taxable – there are
important differences between “total income,” “adjusted gross income,” and “taxable
income.” Second, you need to know the distinction between a person’s “effective tax
rate” and “marginal tax rate.”
Total income is simply the sum of income an individual or couple 4 receives from all
sources. For most people, the largest portion of total income comes from wages or
salaries. Many people also receive investment income from the three standard sources:
interest, capital gains, and dividends. Self-employment income is also included in total
income, along with other types of income such as alimony, farm income, and gambling
winnings.
The amount of federal taxes a person owes is not calculated based on total income.
Instead, once total income is calculated, tax filers are allowed to subtract some expenses
as non-taxable. To obtain adjusted gross income (AGI), certain out-of-pocket expenses
made by a tax filer are subtracted from total income. These expenses include individual
retirement account contributions, allowable moving expenses, student loan interest,
tuition, and a few other expenses. AGI is important because much of the tax data
presented by the IRS are sorted by AGI.
Footnote 4: Married couples have the option of filing their federal taxes either jointly or separately.
Children aged 14 or over with sufficient income ($7,700 in 2002) have to file their own federal income
tax returns.
However, taxes are not calculated based on AGI either. Taxable income is basically
AGI less deductions and exemptions. Deductions are either standard or itemized. The
standard deduction is a fixed amount excluded from taxation – for the 2009 tax year the
standard deduction was $5,700 for single individuals and $11,400 for married couples.
Tax filers have the option of itemizing their deductions. To itemize, a tax filer adds up
certain expenses made during the year including state taxes, real estate taxes, mortgage
interest, gifts to charity, and major medical expenses. 5 If the itemized deductions
exceed the standard deduction, then the itemized total is deducted instead. Exemptions
are calculated based on the number of tax filers and dependents. A single tax filer with
no dependent children can claim one exemption. A married couple with no children can
claim two exemptions. Each dependent child counts as one more exemption. Additional
exemptions are given for being age 65 or over or blind. In 2009, each exemption
excluded a further $3,650 from taxation. 6
Taxable income is obtained by subtracting the deduction and exemption amounts from
AGI. This is the amount a taxpayer actually pays taxes on. However, the amount of tax
owed is not simply a multiple of taxable income and a single tax rate. The federal
income tax system in the U.S. uses increasing marginal tax rates. This means that
different tax rates apply on different portions of a person’s income. The concept is best
illustrated with an example using the 2009 tax rates. For a single filer, the first $8,350 of
taxable income (not total income or AGI) is taxed at a rate of 10%. Taxable income
above $8,350 but less than $33,950 is taxed at a rate of 15%. Taxable income above
$33,950 but less than $82,250 is taxed at a rate of 25%. Income above $82,250 is taxed
at higher marginal rates – 28%, 33%, and 35%.
Consider how we would calculate the taxes due for a single tax filer (let’s call her Susan)
with no children and a total income of $35,000. Assume Susan contributed $3,000 to an
individual retirement account and that this is her only allowable adjustment expense.
Thus, her AGI is $32,000. She claims one exemption (herself) in the amount of $3,650
and the standard deduction of $5,700. Thus, Susan’s taxable income is $22,650. On the
first $8,350 of taxable income she owes 10% in taxes, or $835. The tax rate on the rest of
her income is 15% for a tax of $2,145, (($22,650 - $8,350) × 0.15). So, her total federal
income tax bill is $2,980, ($835 + $2,145). Note that Susan’s taxable income is $12,350
less than her total income.
While Susan paid a maximum tax rate of 15%, we can see that her effective tax rate is
much lower. An effective tax rate can be calculated based on total income, AGI, or
taxable income. Suppose we wish to calculate Susan’s effective tax rate based on her
total income of $35,000. Given that her federal income tax is $2,980, her effective tax
rate is only 8.5%, (($2,980/$35,000) × 100). If we based her effective tax rate on her
AGI, it would be 9.3%, (($2,980/$32,000) × 100).
Footnote 5: Note that some expenses, such as moving costs, are subtracted from total income to obtain AGI
while other expenses, such as mortgage interest, are classified as deductions from AGI to obtain taxable
income.
Footnote 6: Those with high incomes (more than $125,100 for an individual) have their exemption allowance
either reduced or eliminated.
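For readers who want to verify the arithmetic, the Python sketch below reproduces the calculation for Susan.
The first three bracket thresholds, the standard deduction, and the exemption amount are the 2009 figures
quoted above; the upper thresholds shown for the 28%, 33%, and 35% brackets are not given in the text and
are included only as assumed placeholders so the function is complete. This is a simplified illustration, not
a full tax calculator.

# 2009 single-filer brackets: (upper limit of bracket, marginal rate).
# The first three limits are quoted in the text; the remaining limits are assumptions.
BRACKETS_2009_SINGLE = [
    (8_350, 0.10),
    (33_950, 0.15),
    (82_250, 0.25),
    (171_550, 0.28),
    (372_950, 0.33),
    (float("inf"), 0.35),
]

def income_tax(taxable_income, brackets=BRACKETS_2009_SINGLE):
    # Apply each marginal rate only to the portion of income within its bracket.
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

# Susan: $35,000 total income, $3,000 IRA contribution, one exemption, standard deduction.
total_income = 35_000
agi = total_income - 3_000            # adjusted gross income
taxable = agi - 3_650 - 5_700         # minus one exemption and the standard deduction
tax = income_tax(taxable)             # $835 + $2,145 = $2,980
print(taxable, tax, round(tax / total_income, 3))   # 22650 2980.0 0.085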
Social Insurance Taxes
Taxes for federal social insurance programs, including Social Security, Medicaid, and
Medicare, are taxed separately from income. Social insurance taxes are levied on
salaries and wages, as well as income from self-employment. For those employed by
others, these taxes are generally deducted directly from their paycheck. These deductions
commonly appear as “FICA” taxes – a reference to the Federal Insurance Contributions
Act. Self-employed individuals must pay their social insurance taxes when they file their
federal income tax returns.
Social insurance taxes are actually two separate taxes. The first is a tax of 12.4% of
wages, which is primarily used to fund Social Security. Half of this tax is deducted from
an employee’s paycheck while the employer is responsible for matching this contribution.
The other is a tax of 2.9% for the Medicare program. Again, the employee and employer
each pay half. Thus, social insurance taxes normally amount to a 7.65% deduction from
an employee’s wage (6.2% + 1.45%). Self-employed individuals are responsible for
paying the entire share, 15.3%, themselves.
There is a very important difference between these two taxes. The Social Security tax is
due only on the first $106,800 (in 2009) of income. On income above $106,800, no
additional Social Security tax is paid. In other words, the maximum Social Security tax
in 2009 that would be deducted from total wages is $6,622 ($106,800 × 0.062). The
Medicare tax, however, is paid on all wages. Thus, the Medicare tax is truly a flat tax
while the Social Security tax is a flat tax on the first $106,800 of income but then
becomes a regressive tax when we consider income above this limit.
Consider the impact of social insurance taxes on two individuals, one making a typical
salary of $45,000 and another making $300,000. The typical worker would pay 7.65%
on all income, or $3,443, in federal social insurance taxes. The high-income worker
would pay the maximum Social Security contribution of $6,622 plus $4,350 for Medicare
(1.45% of $300,000) for a total bill of $10,972. This works out to a 3.7% overall tax rate,
or less than half the tax rate paid by the typical worker. As the high-income individual
pays a lower rate of taxation, we see that social insurance taxes are regressive.
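The regressivity is easy to verify numerically. The Python sketch below (using only the 2009 figures quoted
above) computes the employee share of social insurance taxes for the two salaries in the example.

WAGE_CAP_2009 = 106_800           # Social Security taxable wage cap for 2009
SS_RATE_EMPLOYEE = 0.062          # employee share of the 12.4% Social Security tax
MEDICARE_RATE_EMPLOYEE = 0.0145   # employee share of the 2.9% Medicare tax

def employee_social_insurance_tax(wages):
    # Social Security applies only up to the wage cap; Medicare applies to all wages.
    social_security = min(wages, WAGE_CAP_2009) * SS_RATE_EMPLOYEE
    medicare = wages * MEDICARE_RATE_EMPLOYEE
    return social_security + medicare

for wages in (45_000, 300_000):
    tax = employee_social_insurance_tax(wages)
    print(wages, tax, round(tax / wages, 4))
# 45000  -> 3442.5  (about $3,443; effective rate 0.0765)
# 300000 -> 10971.6 (about $10,972; effective rate 0.0366, i.e., roughly 3.7%)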
Federal Corporate Taxes
Corporations must file federal tax forms that are in many ways similar to the forms
individuals complete. Corporate taxable income is defined as total revenues minus the
cost of goods sold, wages and salaries, depreciation, repairs, interest paid, and other
deductions. Thus corporations, like individuals, can take advantage of many deductions
to reduce their taxable income. In fact, a corporation may have so many deductions that
it actually ends up paying no tax at all or even receives a rebate check from the federal
government. We’ll discuss this issue further later in the module.
Corporate tax rates, like personal income tax rates, are progressive and calculated on a
marginal basis. In 2009, the lowest corporate tax rate, applied to profits lower than
$50,000 was 15%. The highest marginal corporate tax rate, applied to profits between
$100,000 and $335,000 was 39%. 7 As with individuals, the effective tax rate
corporations pay is lower than their marginal tax rate.
Federal Excise Taxes
An excise tax is a tax on the production, sale, or use of a particular commodity. The
federal government collects excise taxes from manufacturers and retailers for the
production or sale of a surprising number of products including tires, telephone services,
air travel, transportation fuels, alcohol, tobacco, and firearms.
Unlike a sales tax, which is evident as an addition to the selling price of a product, excise
taxes are normally incorporated into the price of a product. In most cases, consumers are
not directly aware of the federal excise taxes they pay. However, every time you buy
gas, make a phone call, fly in a commercial plane, or buy tobacco products, you are
paying a federal excise tax. For example, the federal excise tax on gasoline as of 2009
was about 18 cents per gallon.
Federal excise taxes are another example of a regressive tax. Lower-income households
tend to spend a greater portion of their income on goods that are subject to federal excise
taxes. This is particularly true for gasoline, tobacco, and alcohol products.
Federal Estate and Gift Taxes
The vast majority of Americans will never be affected by the federal estate or gift taxes.
These taxes apply only to the wealthiest Americans. The estate tax is applied to transfers
of large estates to beneficiaries. Similar to the federal income tax, there is an exemption
amount that is not taxed. Only estates valued above the exemption amount are subject to
the estate tax, and the tax only applies to the value of the estate above the exemption. For
example, if the tax rate were 45% and the exemption amount was $2 million, then the tax
on an estate valued at $3.5 million would be $675,000, ((3,500,000-2,000,000)*0.45).
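A minimal sketch of the same calculation in Python (the 45% rate and $2 million exemption are just the
hypothetical values from this example, not a statement of the law in any particular year):

def estate_tax(estate_value, exemption=2_000_000, rate=0.45):
    # Only the portion of the estate above the exemption is taxed.
    return max(estate_value - exemption, 0) * rate

print(estate_tax(3_500_000))   # 675000.0, matching the example above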
As of Fall 2010, the future of the estate tax is in limbo. Under the Economic Growth and
Tax Relief Act of 2001, estate taxes rates were gradually reduced, and exemption rates
gradually increased, over the period 2001-2009. In 2001, the exemption amount was
$675,000 and the tax rate was 55%. For the 2009 tax year, the exemption amount
was $3.5 million and the tax rate was 45%. But for 2010, there is no estate tax at all!
Then, in 2011, the tax is scheduled to be reinstated with an exemption of $1 million and a
tax rate of 55%. The ongoing debate over the estate tax will be covered in more detail
later in this module.
The transfer of large gifts is also subject to federal taxation. The estate tax and gift tax
are complementary because the gift tax essentially prevents people from giving away
their estate to beneficiaries tax-free while they are still alive. In 2009, gifts under
$13,000 were excluded from the tax. Similar to the federal income tax, the gift tax rates
are marginal and progressive, with a maximum tax rate of 45%.
Footnote 7: For the highest profit bracket – profits above $18,333,333 – the marginal rate was 35%.
The estate and gift taxes are the most progressive element of federal taxation. The estate
tax is paid exclusively by those with considerable assets. Even further, the majority of all
estate taxes are paid by a very small number of wealthy taxpayers. According to the Tax
Policy Center, in 2009 the richest 0.1% of those subject to the estate tax pay 42% of the
total estate tax revenue. (Tax Policy Center, 2010).
State and Local Taxes
Like the federal government, state governments also rely on tax revenues to fund public
expenditures and transfer programs. Like the federal government, state governments rely
on several different tax mechanisms including income taxes, excise taxes, and corporate
taxes. Thus, much of the above discussion applies to the tax structures in place in most
states. However, there are some important differences that deserve mention.
First, nearly all states (45 as of 2010) have instituted some type of general sales tax.
State sales tax rates range from 2.9% (Colorado) to 8.25% (California 8 ). A few states
reduce the tax rate on certain goods considered to be necessities, such as food and
prescription drugs. For example, the general sales tax in Illinois is 6.25% but most food
and drug sales are taxed at only 1%. Other states with sales taxes exempt some
necessities from taxation entirely. In most states, localities can charge a separate sales
tax. While local sales taxes are generally lower than state sales taxes, there are
exceptions. In New York the state sales tax is 4% but local sales taxes are often higher
than 4%.
Unlike income taxes, sales taxes tend to be quite regressive. The reason is that low-income
households tend to spend a larger share of their income on taxable items than
high-income households. Consider gasoline – an item that tends to be a smaller share of
total expenditures as income rises. An increase in the state taxes on gasoline impacts
low-income households more than high-income households. Some states, such as Idaho
and Kansas, offer low-income households a tax credit to compensate for the regressive
nature of state sales taxes.
Forty-one states levy an income tax. 9 Most of these states have several progressive tax
brackets (up to 12 rates) similar to the federal income tax. However, state income taxes
tend to be much less progressive than the federal income tax. Six states have only one
income tax rate, meaning that their income tax approaches a flat tax. Several more states
approach a flat tax because the top rate applies at a low income or the rates are relatively
constant. For example, Maine’s two tax rates are 6.50% and 6.85%.
Footnote 8: Local sales taxes are also levied in some municipalities in California, which can raise the total
sales tax to as high as 10.75%.
Footnote 9: Two other states, Tennessee and New Hampshire, levy no state income tax but do tax dividends
and interest.
Another important distinction between the federal system of taxation and the taxes levied
at state and local levels is use of property taxes. In fact, property taxes tend to be the
largest revenue source for state and local governments. The primary property tax levied
in the U.S. is a tax on real estate, including land, private residences, and commercial
properties. Generally, the tax is an annual assessment calculated as a proportion of the
value of the property, although the formulas used by localities differ significantly.
Property taxes are commonly collected at a local level, but a share of property taxes is
allocated for state purposes. Property taxes tend to be regressive, although less regressive
than excise and sales taxes. The reason is that high-income households tend to have a
lower proportion of their assets subjected to property taxes. While renters do not directly
pay property taxes, most economists conclude that the costs of property taxes are largely
passed on to renters in the form of higher rents.
Composition of Tax Collections in the U.S.
Table 1 presents government tax receipts, by tax source, for 2008 (the most recent year
for which complete data were available). The table shows that federal taxes dominate the
nation’s tax system with nearly 65% of all receipts. The largest federal tax is the income
tax, followed closely by social insurance taxes. State and local tax systems are primarily
dependent on sales, income, and property taxation. The data in Table 1 cover the major
taxes utilized in the United States. To gain a broader perspective on taxation, see Box 1
for a summary of tax mechanisms that are major revenue sources for some countries but
are currently non-existent or insignificant in the U.S.
Table 1. 2008 U.S. Tax Receipts, by Source
Source                                   Amount (Millions $)    Percent of All Taxes
Federal Taxes
  Income Taxes                                     1,145,700                   30.4%
  Social Insurance Taxes                             900,200                   23.9%
  Corporate Taxes                                    304,300                    8.1%
  Excise Taxes                                        67,300                    1.8%
  Estate Taxes                                        23,000                    0.6%
  Total, Federal Taxes                             2,440,500                   64.7%
State Taxes
  Sales Taxes                                        304,400                    8.1%
  Property Taxes                                     409,700                   10.9%
  Income Taxes                                       304,600                    8.1%
  Corporate Taxes                                     57,800                    1.5%
  Excise and Other Taxes                             253,900                    6.7%
  Total, State Taxes                               1,330,400                   35.3%
Total, All Taxes                                   3,770,900                  100.0%
Source: U.S. Census Bureau (2010), except for federal estate tax data from Tax Policy Center
(2008).
BOX 1. TAX ALTERNATIVES
It is worthwhile to briefly consider tax types that are not currently important in the U.S.
because these mechanisms are used in other countries or are central in various proposals
to reform the U.S. tax system. We summarize five tax types here:
1. National sales tax. This would function similar to a state sales tax – as an
addition to the retail price of certain products. A national sales tax would clearly
be simpler and cheaper to administer than the current federal income tax. It would
also encourage savings because, under most proposals, income that is not spent on
taxable goods and services is not taxed. There are, however, two significant
disadvantages to a national sales tax. First, it would create an incentive for black
market exchanges to evade the tax. Second, it can be highly regressive – similar
to the regressivity of state sales taxes. A national sales tax could be made less
regressive, or even progressive, by providing rebates for low-income households.
2. National consumption tax. This is slightly different from a national sales tax. A
household would pay the tax at the end of the year based on the value of its annual
consumption of goods and services. Consumption can be calculated as total
income less money not spent on goods and services (i.e., invested or saved).
Again, a consumption tax would promote savings by exempting it from taxation.
A consumption tax could also be designed to be progressive by taxing different
levels of consumption at different marginal rates.
3. Value added tax. Most developed countries levy some form of value added tax
(VAT). A VAT is levied at each stage in the production process of a product,
collected from manufacturers according to the value added at each stage. Thus,
the tax is not added to the retail price but incorporated into prices, similar to the
way excise taxes become embedded into the price of products. Compared to a
national sales tax, a VAT reduces the likelihood of black markets.
4. Wealth taxes. While the U.S. tax system includes local property taxes and, at
least for a while, estate taxes, there is no tax on holdings of other assets such as
corporate stocks, bonds, and personal property. Several European countries,
including Sweden, Spain, and Switzerland, have instituted an annual wealth tax.
A wealth tax could be very progressive by setting high rates and becoming
effective only at significant wealth levels.
5. Environmental taxes. These are levied on goods and services in proportion to
their environmental impact. One example is a carbon tax, which taxes products
based on the emissions of carbon attributable to their production or consumption.
The rationale of environmental taxation is that it encourages the use and
development of goods and services with reduced environmental impacts. Like
other taxes on goods and services, environmental taxes can be regressive –
suggesting that environmental taxes need to be combined with other progressive
taxes or rebates for low-income households. Among developed countries, the U.S.
collects the smallest share of tax revenues from environmental taxes (OECD,
2010).
III. A BRIEF HISTORY OF TAXATION IN THE U.S. 10
Before the Federal Income Tax
The tax mechanisms used during the first 150 years or so of U.S. tax history bear little
resemblance to the current system of taxation. First, the U.S. Constitution restricted
“direct” taxation by the federal government – meaning taxes directly on individuals.
Instead, the federal government relied on indirect taxes including taxes on imports
(tariffs) and excise taxes. Tariffs were the major source of U.S. government receipts
from the beginning of the nation up to the early 1900’s. For example, in 1800 custom
duties comprised about 84% of government receipts (U.S. Census Bureau, 1960).
Internal federal revenue collections (which exclude tariffs on imports) as recently as the
early 20th century were primarily derived from excise taxes on alcohol. In 1900 over
60% of internal revenue collections came from alcohol excise taxes with another 20%
from tobacco excise taxes.
Another important difference is the scale of government taxation and expenditures
relative to the entire economy. Government spending is currently a major portion of the
total U.S. economy – in 2010 government expenditures and investment at all levels
comprised about 20% of total economic output. In the late 1800s government
expenditures were responsible for only about 2% of national output (earlier data on
national output are not available). The role of government has become more prominent
as a result of expansion of military activity and an increase in the provision of public
services. Consequently an overall trend of increasing taxation is evident, although we’ll
see that this trend has recently stabilized or reversed.
The Constitutional framers were wary of a government’s power to tax. Taxation of the
American Colonies by a distant and corrupt England was a driving force behind the
American Revolution. Consequently, they believed in decentralized taxation and
delegated most public revenue collection to localities, which relied primarily on property
taxes. During peacetime the federal government was able to meet its expenses through
relatively modest excise taxes and tariffs. During times of war, such as the War of 1812,
federal taxes were temporarily raised to finance the war or pay down the ensuing debts.
Once the financial crisis passed, taxes were reduced in response to public opposition to
high tax rates.
Like previous wars, the Civil War initiated an increase in both excise tax and tariff rates.
Government revenue collections increased by a factor of seven between 1863 and 1866.
Perhaps the most significant tax policy enacted during the Civil War was the institution
of the first national income tax. Concerns about the legality of the tax, considering the
Constitution’s prohibition of direct taxation, were muted during the national emergency.
The income tax rates were low by modern standards – a maximum rate of 10% along
with generous exemptions meant that only about 10% of households were subject to any
income tax. Still, the income tax generated over 20% of federal revenues in 1865. After
the war, few politicians favored the continuation of the income tax, and in 1872 it was
allowed to expire.
Footnote 10: The history of taxation is primarily derived from Brownlee (1996).
The impetus for the modern federal income tax rests not with a wartime emergency but
with the Populist movement of the late 1800s. The internal tax system in place at the
time, based primarily on excise taxes on alcohol and tobacco, was largely regressive.
The Populists revived interest in an income tax as a means to introduce a progressive tax
based on ability to pay. They saw it as a response to excessive monopoly profits and the
concentration of wealth and power. In other words, the tax was not envisioned as a
means to generate significant additional public revenue but as a vehicle of social justice.
A federal income tax, with a large exemption of $4,000, was instituted in 1894 but the
Supreme Court ruled it unconstitutional in 1895. Over the next couple of decades
proposals were made for a constitutional amendment to establish a federal income tax.
While these attempts were defeated, support for federal income taxation gradually
increased. Eventually, in 1913 the 16th Amendment was ratified creating the legal basis
for the federal income tax.
While the initial income tax was progressive, it was less radical than many desired. In
fact, many conservatives expressed guarded support for the measure to prevent a more
significant tax. While the income tax was targeted towards the wealthy – in the first few
years only about 2% of households paid any income tax – tax rates of only 1%-7%
prevented it from generating significant revenues.
“...virtually none of the income tax proponents within the government believed
that the income tax would become a major, yet alone the dominant, permanent
source of revenue within the consumption-based federal tax system.” (Brownlee,
1996, p. 45)
These views were to quickly change as the nation required a dramatic increase in
revenues to finance World War I.
The Growth of Direct Taxation
Rather than relying on increases in excise taxes and tariffs to finance World War I, the
administration of Woodrow Wilson transformed the income tax framework laid down
just a few years previously. Desiring both to raise additional revenue and enforce social
justice, the top marginal rate increased dramatically from 7% in 1915 to 67% in 1917
(IRS, 2002). Corporate taxes also became an important revenue source, accounting for
over one-quarter of internal revenue collections in 1917. In 1916 the estate tax was
created, not necessarily to generate large revenues but as another instrument of
progressive taxation.
Unlike previous wars, much of the tax system laid down during World War I remained in
place after the war. In the period from 1910 to 1925 tariffs fell from about half of
government receipts to less than 15%. Meanwhile the new corporate and individual
income taxes made up nearly half of government receipts in the mid 1920s. The level of
excise tax collections dropped significantly, especially during the years of Prohibition
when alcohol excise taxes virtually disappeared.
The Great Depression, of course, caused a significant decline in federal receipts. In 1932
tax rates were increased in an attempt to boost federal revenue. Franklin Roosevelt, in
the years leading up to World War II, presented progressive taxation as a key element of
the New Deal. However, the most significant measure enacted during this period was the
creation of old-age insurance.
Prior to national social insurance programs, poverty was the common state of the elderly
(Skidmore, 1999). By the 1930s, several European countries had already instituted
programs of social insurance. Germany was the first to establish old-age and survivors
pensions in 1889 (Peterson, 1999). The Great Depression finally motivated policy
makers in the U.S. to enact similar legislation. Rather than funding Social Security
programs through increases in income, or other, taxes, the funding mechanism was a
separate tax, split equally between employers and employees. All employees covered by
the system 11 contributed and received benefits regardless of their income. This design
was intended to protect the system from political attack. As everyone who pays into the
system receives benefits, Social Security is not considered “welfare” that is allocated to
only a segment of the population. Also, because Social Security is a separate tax,
contributors view their old-age payments as entitlements and oppose attempts to weaken
the program. This design has so far proved very successful – Social Security is often
called the “third rail” of American politics (i.e., touch it and you die).
World War II created yet another emergency situation requiring additional revenues.
Similar to Woodrow Wilson during World War I, President Franklin Roosevelt sought to
raise revenues primarily from higher taxes on corporations and high-income households.
Roosevelt went so far as to state that:
“In this time of grave national danger, when all excess income should go to win
the war, no American citizen ought to have a net income, after he has paid his
taxes, of more than $25,000.” (Brownlee, 1996, p. 91)
Roosevelt was unable to obtain enough Congressional support to enact his most
progressive proposals. The ensuing compromise did produce a more progressive federal
income tax but it also became levied on more households. Personal exemptions were
reduced by half between 1939 and 1942 – meaning the income tax reached well into the
middle class for the first time. The taxable income subject to the highest marginal rate
dropped from $5 million in 1941 down to $200,000 in 1942. Also, the top marginal tax
rate reached a record high of 94% in 1944. Another change during World War II was
withholding federal taxes from an employee’s paycheck rather than requiring payment of
taxes due at the end of the year.
Footnote 11: While Social Security has expanded over the years to cover more employees, all workers are
not currently covered by the system. For example, about one-quarter of state and local government
employees are not included in the system (Peterson, 1999).
These, as well as other, changes produced a dramatic
shift in the structure of federal taxation:
“Under the new tax system, the number of individual taxpayers grew from 3.9
million in 1939 to 42.6 million in 1945, and federal income tax collections over
the period leaped from $2.2 billion to $35.1 billion. By the end of the war nearly
90 percent of the members of the labor force submitted income-tax returns, and
about 60 percent of the labor force paid income taxes. … At the same time, the
federal government came to dominate the nation’s revenue system. In 1940,
federal income tax had accounted for only 16 percent of the taxes collected by all
levels of government; by 1950 the federal income tax produced more than 51
percent of all collections. Installation of the new regime was the most dramatic
shift in the nation’s tax policies since 1916.” (Brownlee, 1996, p. 96-97)
As in the period after World War I, much of the new tax structure instituted during World
War II remained in place after the war. Both major political parties expressed support for
a progressive but broad income tax, relatively flat tax rates on corporate profits, and
social insurance taxes that were basically regressive. Public support for the existing tax
system was boosted by patriotic feelings and broad-based economic growth after the war.
Changes to the tax system between the end of World War II and the 1980’s were
generally minor. The Social Security tax occasionally increased as more people were
receiving benefits. The initial tax rate of 2% (1% each for employers and employees) had
increased to 6.13% by 1979. The Medicare and Medicaid programs were established in
the 1960s. Across-the-board tax cuts in 1964 reduced marginal rates for both low- and
high-income households (the top marginal rate fell from 91% in 1963 to 70% in 1965).
Still, government continued to become a more significant portion of the entire economy
in the decades after World War II. Total government expenditure and investment
increased gradually from less than 18% of GDP in 1946 to over 22% by the mid 1970s.
From the “Reagan Revolution” to the Bush Tax Cuts
The general stasis of the federal tax system ended in the 1980s with the passage of
several important tax reforms. Ronald Reagan was elected president in 1980 on a
platform of smaller government and lower taxes. The Economic Recovery Tax Act of
1981 (ERTA) enacted the largest tax cut in American history 12 and inspired tax cutting
by many other nations in the 1980s. The supply-side rationale behind ERTA’s sharp
reduction in tax rates, particularly on high-income households and capital, was that
greater incentives would motivate increased investment and economic activity. The
ensuing economic growth and consequent tax revenue growth would, in theory, more
than offset the revenue reductions as a result of the tax cuts. Thus, the theory was that tax
cuts could actually produce an increase in federal revenues and address the growing
federal budget deficit as well. ERTA phased in a reduction in the top tax rate from 70%
to 50%, enacted several corporate tax cuts, and indexed many tax parameters to inflation
(such as personal exemptions and deductions).
Footnote 12: When measured in constant dollars (adjusted for inflation).
Analysis suggests that, in reality, ERTA resulted in the largest reduction in federal
revenues of any tax bill since World War II (Tempalski, 1998). The federal budget
deficit continued to grow. The very next year, in 1982, the largest peacetime tax increase
was passed (Martin, 1991). The act repealed some of the more revenue-reducing
provisions of ERTA, such as accelerated depreciation reductions for corporations, and
closed several corporate loopholes in the tax code. Social Security reforms were enacted
in 1983 that increased Social Security tax rates and initiated taxation of some benefits.
Reagan continued to push for further tax reforms, leading to the Tax Reform Act of 1986
– considered to be the most comprehensive revision of the tax code since the 1950s
(Petska and Strudler, 1999). This act reduced top income tax rates even further – from
50% in 1986 to 28% in 1988. Among many other changes, it also lowered the top
corporate tax rate from 46% to 34%.
Clearly, the “Reagan revolution” is an important era in U.S. tax history, but many people
misinterpret it as a period where the size of the federal government was drastically
reduced and taxes cut significantly. Despite the two major tax cuts during Reagan’s
terms, federal revenue collections increased at nearly the same pace as national output
(total federal revenues increased about 76% from 1980-1988 while GDP increased 83%).
The actual changes were more evident in the distribution of federal revenues than their
total level. The share of revenues from both individual and corporate taxation fell (by 9%
and 16% respectively) while the portion from social insurance taxes increased by 38%.
As the individual and corporate taxes are progressive, while social insurance taxes are
regressive, the outcome was a decrease in the overall progressivity of the federal tax
system. Specific changes within the individual income tax code exacerbated the decline
in progressivity.
The Reagan era failed to control the growing federal deficit. The annual budget deficits
of the federal government tripled during the 1980s 13 (OMB, 2003). Partly to raise
additional revenue to try to reduce deficits, the first President Bush reneged on his
campaign promise of “no new taxes” and agreed to a compromise tax proposal in 1990
that raised the top marginal tax bracket to 31%. President Clinton reinstated additional
progressivity in 1993 by creating the 36% and 39.6% individual tax brackets. In 1993,
the corporate tax rate was increased slightly to 35%. These changes produced an increase
in the progressivity of federal taxes.
The most recent important tax legislation was the $1.35 trillion Bush tax cut passed in
2001. The major provisions of this act include lowering individual income tax rates
across-the-board, scheduling repeal of the estate tax in 2010, and increasing the amount
employees can contribute under various programs for retirement purposes. Many of the
bill’s provisions are “back-loaded,” meaning the tax reductions are phased in over time
with most of the tax reduction occurring in the future. For example, the top marginal
bracket fell from 39.6% in 2001 to 38.6% in 2002 but eventually fell to 35.0% in 2006.
Footnote 13: This is based on the “on-budget” calculations. The on-budget accounting excludes the Social
Security trust fund as well as other minor balances.
The Bush tax cut reduced the overall progressiveness of the federal income tax as high-income taxpayers received a disproportionate share of the total cuts (CTJ, 2001).
A somewhat smaller tax cut was passed in 2003 that, among other changes, accelerated
scheduled tax rate decreases and lowered the maximum tax rate on capital gains and
dividends. Most recently, the 2009 American Recovery and Reinvestment Act of 2009
instituted or expanded various tax credits such as a payroll tax credit of $400 per worker
and an expanded tax credit for college tuition.
IV. Summary Data of U.S. Tax History
Until quite recently, tax collections have tended to increase over time, paralleling the
increase in the size of the federal government. We see in Figure 1 that federal tax
revenues have grown considerably during the 20th century, even after adjusting for
inflation. A large increase in federal tax collections occurred during World War II, with
relatively consistent growth after about 1960. However, notice occasional declines in
federal tax revenues, due either to recessions or to major tax code changes. The growth
of state and local tax collections, by comparison, has been steadier with less fluctuation.
The reason is that state and local tax revenues are derived primarily from property and
sales taxes, which vary less than income (particularly corporate income) during business
cycles.

Figure 1. Tax Collections, 1913-2009 (All values in 2009 dollars) [14]
14. Data on state and local taxes are incomplete and/or inconsistent prior to 1932. All data from various editions of the Statistical Abstract of the United States and U.S. Census Bureau (1960).
Another way to illustrate the growth of federal taxation is to measure it relative to
national economic output. In Figure 2 we plot federal and state and local tax collections
as a share of GDP. Three facts are evident from Figure 2. First, total tax collections have
generally grown as a percentage of GDP over the 20th century. Again, the largest leap
occurred during World War II, but some additional growth is evident after the war as
well. The second fact is that federal tax revenues now substantially exceed state and
local tax revenues. While World War II solidified the federal government as the primary
tax collector in the U.S., note that this trend began prior to the war. Finally, note the
decline in federal taxes as a percentage of GDP since 2000. This is a result of both
economic recessions and declines in federal tax rates. In fact, federal taxes as a
percentage of GDP were lower in 2009 than in any year since the 1940s.
Figure 2. Tax Collections as a Percentage of GDP, 1913-2009 [15]
15. Data on state and local taxes are incomplete and/or inconsistent prior to 1932.
As federal revenues grew during the 20th century, the composition of taxation has
changed considerably. We see in Figure 3 that at the beginning of the century federal
taxation was dominated by excise taxes. Except for a revival of excise taxes during the
Depression Era, their importance has generally diminished over time. Corporate taxes
became the most significant source of federal revenues for the period 1918-1932. After a
period of higher corporate taxes during World War II, corporate taxes have generally
diminished in significance relative to other forms of federal taxation. Personal income
taxes became the largest source of federal revenues in 1944 and have remained so. Since
World War II, income taxes have consistently supplied between 40-50% of federal
revenues. Since about 1950, social insurance taxes have increased their share of federal
revenues from about 10% up to nearly 40%. In fact, social insurance taxes may soon
exceed personal income taxes as the largest source of federal revenues.
Figure 3. Composition of Federal Taxes, 1913-2009
The composition of state and local taxes, with its increased reliance on sales and property
taxes, differs from the composition of federal taxes. Of course, each state has a different
tax system – some states have no income and/or sales taxes, and tax rates can differ
significantly across states. In this module, we combine tax data for all states rather than
presenting a state-by-state analysis. Figure 4 presents the composition of state and local
taxes over the period 1945-2009. The two major trends that are evident are a decline in
the importance of property taxes and an increase in the importance of personal income
taxes, except for a recent reversal of these trends in the last few years. While property
taxes were the primary source of state and local revenues until the 1970s, sales taxes
became the major source of revenues until 2008, when property taxes again became the
major revenue source.
Figure 4. Composition of State and Local Taxation, 1945-2009
V. THE DISTRIBUTION OF TAXES IN THE UNITED STATES
Tax Incidence Analysis
There are basically two ways to analyze how the tax burden is distributed. The easiest
way is to measure the taxes directly paid by entities, such as households or businesses,
classified according to criteria such as household income, business profit levels, etc.
These data can be obtained directly from aggregate tax return data published by the IRS
and from reports from other government agencies. This approach considers only who
actually pays the tax to the government. Thus, it would allocate corporate taxes to
corporations, excise taxes to manufacturers, sales taxes to consumers, etc.
The second approach, called tax incidence analysis, is more complex yet more
meaningful. While taxes are paid by various entities other than individuals, such as
corporations, partnerships, and public service organizations, the burden of all taxes
ultimately fall on people. The final incidence of taxation is contingent upon how a
specific tax translates into changes in prices and changes in economic behavior among
consumers and businesses:
“Tax incidence is the study of who bears the economic burden of a tax. More
generally, it is the positive analysis of the impact of taxes on the distribution of
welfare within a society. It begins with the very basic insight that the person who
has the legal obligation to make a tax payment may not be the person whose
welfare is reduced by the existence of the tax. The statutory incidence of a tax
refers to the distribution of those legal tax payments – based on the statutory
obligation to remit taxes to the government. ...
Economic incidence differs from statutory incidence because of changes in
behavior and consequent changes in equilibrium prices. Consumers buy less of a
taxed product, so firms produce less and buy fewer inputs – which changes the net
price or return to each input. Thus the job of the incidence analyst is to determine
how those other prices change, and how those price changes affect different
groups of individuals.” (Metcalf and Fullerton, 2002, p. 1)
Tax incidence analysis has produced a number of generally accepted conclusions
regarding the burden of different tax mechanisms. Remember, for example, that the
payroll tax on paper is split equally between employer and employee:
“So, who really pays the payroll tax? Is the payroll tax reflected in reduced
profits for the employer or in reduced wages for the worker? ... there is generally
universal agreement that the real burden of the tax falls almost entirely on the
worker. Basically, an employer will only hire a worker if the cost to the employer
of hiring that worker is no more than the value that worker can add. So, a worker
is paid roughly what he or she adds to the value of production, minus the payroll
tax; in effect, the whole tax is deducted from wages. ... to repeat, this is not a
controversial view; it is the view of the vast majority of analysts...” (Krugman,
2001, p. 43)
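To make the logic of that argument concrete, the short sketch below works through a single hypothetical case. The dollar value added, the 15% combined payroll tax rate, and the 50/50 statutory split are assumptions chosen only for illustration; they are not figures from this module.

```python
# Illustrative payroll tax incidence (assumed numbers, not data from this module).
# Premise from the quote above: an employer pays a worker roughly the value the
# worker adds minus the payroll tax, regardless of how the tax is split on paper.

value_added = 50_000.0   # assumed value the worker adds to production, per year
payroll_rate = 0.15      # assumed combined payroll tax rate (employer + employee)
employer_share = 0.5     # statutory split: half remitted by the employer...
employee_share = 0.5     # ...and half withheld from the worker's paycheck

# The employer will spend at most `value_added` on the wage plus its share of tax:
#   wage * (1 + payroll_rate * employer_share) = value_added
wage = value_added / (1 + payroll_rate * employer_share)

# The worker then pays the employee share out of that wage.
take_home = wage * (1 - payroll_rate * employee_share)
total_tax = value_added - take_home

print(f"Gross wage:    ${wage:,.0f}")
print(f"Take-home pay: ${take_home:,.0f}")
print(f"Payroll tax effectively borne by the worker: ${total_tax:,.0f}")
```

Take-home pay ends up below the worker's value added by the full amount of the tax, which is the sense in which the economic burden falls on the worker even though the statutory burden is split.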
The most common assumption made regarding the allocation of corporate taxes is that
the burden of these taxes falls almost exclusively on the owners of capital investments.
Given the mobility of capital, the burden is not limited to owners of corporate capital but
extends to owners of all capital. 16 This result is primarily a theoretical finding – in reality
some portion of the corporate tax burden likely falls on workers (through lower wages)
and consumers (through higher prices).
Excise taxes, although directly paid by manufacturers, are generally attributed entirely to
consumers according to their consumption patterns. 17 This result is based on an
assumption of perfect competition in the affected industries. Real-world markets,
however, are not perfectly competitive. The actual incidence of excise taxes will depend
on the degree of competition in an industry. For example, imperfectly competitive
industries with upward-sloping supply curves imply that prices increase by less than the
tax and that a portion of excise taxes is borne by businesses. 18
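The point that consumer prices rise by less than the tax when supply slopes upward can be seen in a small market sketch. The linear demand and supply parameters below are hypothetical and chosen only to show the mechanics; they are not estimates for any actual industry.

```python
# Hypothetical linear market illustrating partial pass-through of a per-unit excise tax.
# Demand: Qd = a - b * p_consumer;  Supply: Qs = c + d * p_producer,
# where the producer receives p_consumer minus the tax.

a, b = 100.0, 2.0   # assumed demand intercept and slope
c, d = 10.0, 3.0    # assumed supply intercept and slope

def equilibrium(tax: float) -> tuple[float, float]:
    """Return (consumer price, quantity) for a given per-unit tax."""
    # Setting Qd = Qs:  a - b*p = c + d*(p - tax)  =>  p = (a - c + d*tax) / (b + d)
    p_consumer = (a - c + d * tax) / (b + d)
    return p_consumer, a - b * p_consumer

p0, _ = equilibrium(tax=0.0)
p1, _ = equilibrium(tax=5.0)

rise = p1 - p0
print(f"Price without tax: {p0:.2f}; price with a 5.00 per-unit tax: {p1:.2f}")
print(f"Consumers bear {rise / 5.0:.0%} of the tax; producers bear the rest.")
```

With these particular slopes the consumer price rises by 3.00 of the 5.00 tax, so 60% is passed forward and 40% is borne by producers; the split depends entirely on the assumed slopes.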
16. See summary in Metcalf and Fullerton (2002).
17. See CBO (2008).
18. See Fullerton and Metcalf (2002) for a summary of incidence assumptions and analyses for different types of taxes.
The burden of sales taxes is generally assumed to fall directly on consumers who buy the
taxed goods and services. Again, this is a simplifying assumption – in reality some
portion of sales taxes filters to corporate owners, other capital owners, and workers.
Personal income taxes paid by households are directly attributed to those households
paying the tax. Estate tax burdens fall on the heirs paying the tax. Finally, property tax
burdens are generally assumed to fall on property owners although the burden can be
passed on to renters (some analysts attribute property taxes more broadly to owners of
capital).
So, for several types of tax mechanisms (personal income, sales, excise, and estate taxes),
data on direct tax payments is analogous to tax incidence. However, for other taxes
(payroll, corporate, and to a lesser extent property taxes) the direct data on tax payments
will differ from the ultimate burden of the tax.
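The working assumptions just summarized amount to a simple reallocation rule: take each statutory payment and assign it to the group assumed to bear it. The sketch below is only a schematic of that bookkeeping; the dollar figures are invented for illustration and the mapping follows the general conclusions described above, not any official methodology.

```python
# Schematic tax incidence bookkeeping: statutory payer -> assumed economic bearer.
INCIDENCE_ASSUMPTIONS = {
    "personal income": "households paying the tax",
    "payroll":         "workers",
    "corporate":       "owners of capital",
    "excise":          "consumers",
    "sales":           "consumers",
    "estate":          "heirs paying the tax",
    "property":        "property owners",
}

# Hypothetical statutory payments (who actually writes the check), $ billions.
statutory_payments = {
    "personal income": 900.0,
    "payroll":         850.0,   # half remitted by employers on paper
    "corporate":       300.0,
    "excise":           70.0,
    "sales":           450.0,
    "property":        420.0,
}

def allocate_burden(payments: dict[str, float]) -> dict[str, float]:
    """Reassign each statutory payment to the group assumed to bear it."""
    burden: dict[str, float] = {}
    for tax, amount in payments.items():
        bearer = INCIDENCE_ASSUMPTIONS[tax]
        burden[bearer] = burden.get(bearer, 0.0) + amount
    return burden

for bearer, amount in allocate_burden(statutory_payments).items():
    print(f"{bearer:30s} ${amount:,.0f} billion")
```

For personal income, sales, excise, and estate taxes the mapping changes nothing, which is why direct payment data are a reasonable proxy for incidence there; for payroll, corporate, and property taxes it shifts the burden to a different group than the statutory payer.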
Using Effective Tax Rate Data to Determine Tax Progressivity
As mentioned before, a tax is progressive if the percentage of income a person pays for
the tax increases as income increases. Thus, we can determine whether a tax is
progressive or regressive by looking at a table showing the effective tax rates for a
particular tax for people in different income categories. If effective tax rates increase
(decrease) with increasing income, then the tax is progressive (regressive). Table 2
shows the percentage of income people in each adjusted gross income (AGI) category
paid in federal income taxes in 2008, the most recent data available. We see that
effective tax rates for the federal income tax tend to increase with increasing income
(although not always). For taxpayers making less than $100,000 AGI per year, the
effective federal income tax rate averages less than 10% of income. For those making
more than $200,000 per year, the federal income tax averages more than 20% of income.

Table 2. Distribution of Federal Income Taxes, 2008

AGI Category            Percent of Returns   Average AGI    Average Income Taxes   Effective Income Tax Rate
$1-$10,000                    16.7             $5,099              $177                     3.5%
$10,000-$20,000               16.0            $14,927              $513                     3.4%
$20,000-$30,000               13.0            $24,798            $1,421                     5.7%
$30,000-$50,000               18.0            $39,126            $2,808                     7.2%
$50,000-$75,000               13.5            $61,470            $5,246                     8.5%
$75,000-$100,000               8.2            $86,421            $8,037                     9.3%
$100,000-$200,000              9.7           $133,208           $16,903                    12.7%
$200,000-$500,000              2.4           $285,735           $55,984                    19.6%
$500,000-$1,000,000            0.4           $679,576          $163,513                    24.1%
More than $1,000,000           0.2         $3,349,101          $780,550                    23.3%
The federal income tax is clearly progressive because those with higher incomes
generally pay a larger share of their income for the tax. For a regressive tax, effective tax
rates tend to decrease as income increases. If effective tax rates are constant at different
income levels, then a tax is proportional.
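As a concrete check on this rule, the sketch below recomputes effective rates from the Table 2 figures (average income taxes divided by average AGI) and applies a simplified monotonicity test. Because the computed rates dip slightly between the first two brackets and again at the very top, the test reports a mixed rather than strictly rising pattern, which matches the "although not always" caveat above; the classification function is an illustration, not a formal definition.

```python
# Effective tax rate = taxes paid / income. Progressive if rates rise with income,
# regressive if they fall, proportional if constant.
# (average AGI, average income tax) pairs below are taken from Table 2 (2008 data).
rows = [
    (5_099, 177), (14_927, 513), (24_798, 1_421), (39_126, 2_808),
    (61_470, 5_246), (86_421, 8_037), (133_208, 16_903),
    (285_735, 55_984), (679_576, 163_513), (3_349_101, 780_550),
]

rates = [tax / agi for agi, tax in rows]
for (agi, _), rate in zip(rows, rates):
    print(f"Average AGI ${agi:>9,}: effective rate {rate:5.1%}")

def classify(rates: list[float], tol: float = 1e-9) -> str:
    """Simplified rule: strictly rising -> progressive, strictly falling ->
    regressive, flat -> proportional, anything else -> mixed."""
    diffs = [b - a for a, b in zip(rates, rates[1:])]
    if all(d > tol for d in diffs):
        return "progressive"
    if all(d < -tol for d in diffs):
        return "regressive"
    if all(abs(d) <= tol for d in diffs):
        return "proportional"
    return "mixed pattern (not strictly monotonic)"

print("Federal income tax, Table 2 data:", classify(rates))
```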
Looking at effective tax rates by income categories can normally determine whether a tax
is progressive or regressive. However, there may be some cases where effective tax rates
do not follow a consistent pattern across income levels. For example, suppose that
effective taxes first increase but then decrease as we move up the income spectrum.
Another limitation with data on effective tax rates is that this approach does not tell us the
degree of progressivity or regressivity. We might not be able to determine whether one
tax is more progressive than another or whether a particular tax becomes more or less
progressive over time.
Researchers have come up with several tax indices that measure the progressivity of a tax
as a single number. These indices allow direct comparisons across different tax types and
across time. The most common tax progressivity index is discussed in Box 2.
Effective Tax Rates in the United States
Data on the distribution of taxes in the U.S. are available from several sources. The
government sources that publish data on tax distribution include the Internal Revenue
Service (IRS), the Joint Committee on Taxation (JCT), the Congressional Budget Office
(CBO), and the Office of Tax Analysis within the U.S. Treasury. The IRS data are the
most detailed but focus on federal income and estate taxes. The IRS publishes data on
corporate taxes but does not conduct tax incidence analysis. The JCT occasionally
conducts tax incidence analyses but only on the federal income tax, payroll taxes, and
federal excise taxes. The CBO adds the incidence of federal corporate taxes to their
analyses but still omits the federal estate tax and all state and local taxes.
The only source for tax incidence data for all taxes in the U.S. is Citizens for Tax Justice
(CTJ), a non-profit organization. CTJ uses data from government sources but has
developed its own models of tax incidence. Comparison of tax progressivity data from
CTJ with data from the federal sources listed above indicates that their results are
generally similar to the government’s results and not biased in either direction (Roach,
2003).
BOX 2. MEASURING TAX PROGRESSIVITY – THE SUITS INDEX
The Suits Index, developed by Daniel Suits in the 1970s (Suits, 1977), calculates a single
number that measures tax progressivity. The approach basically compares the cumulative
share of income received by taxpayers, ordered from lowest to highest, to their cumulative
share of taxes paid. For a progressive (regressive) tax, the share of taxes paid will tend to
be less (more) than the share of income as we move up the income spectrum. Other tax
progressivity indices have been developed but the Suits Index remains the most widely
used approach (Anderson, et al., 2003). A small computational sketch of the calculation, using the data in Table 3, follows that table below.
While the calculation details are not presented here, the Suits Index is a number ranging
between –1 and +1. A negative Suits Index means that the tax is regressive while a
positive index indicates a progressive tax (with a value of zero for a proportional tax).
The Suits Index can be used to compare the degree of progressivity of different tax types
as well as determine whether a tax becomes more or less progressive over time.
The Suits Index has been used to estimate the progressivity of different tax types in the
U.S. for 2007 (Roach, 2010). Table 2.1 shows that the U.S. tax system contains a mixture
of progressive and regressive taxes. The federal estate tax is the most progressive tax
while the federal corporate and income taxes are also progressive. On the other hand,
federal excise taxes are the most regressive. Federal social insurance taxes and overall
state and local taxes are also regressive. When all federal taxes are considered, the Suits
Index of +0.18 indicates that federal taxation is progressive. The entire U.S. tax system is
also progressive, but the recent Suits Indices of +0.05 and +0.06 are closer to a value of
zero (a proportional tax) than just the federal tax system.
Table 2.1. Suits Index Estimates of the U.S. Tax System, 2007, by Tax Type1
Tax Type
Federal Income
Federal Social Insurance
Federal Excise
Federal Corporate
Federal Estate and Gift
State and Local
Total Federal
All U.S. Taxes (2001 data)
All U.S. Taxes (2004 data)
All U.S. Taxes (2009 data)
Suits Index
+0.42
-0.20
-0.31
+0.51
+0.63
-0.12
+0.18
+0.09
+0.05
+0.06
__________________
1 – The Suits Index for the federal estate and gift tax is based upon 2008 data.
Table 3 presents the tax distribution data from CTJ for 2009. We see that while the
federal tax system is progressive, the state and local tax system is, on average, regressive.
Overall, the tax system in the U.S. is progressive, although the rate of progressivity levels
off at upper income levels and actually reverses at the highest income level in Table 3.
Table 3. Effective Tax Rates, 2009 [19]

Income Group    Average Income    Federal Taxes    State & Local Taxes    All Taxes
Lowest 20%          $12,400            3.6%              12.4%              16.9%
Second 20%          $25,000            8.7%              11.8%              20.5%
Third 20%           $40,000           13.9%              11.3%              25.3%
Fourth 20%          $66,000           17.2%              11.3%              28.5%
Next 10%           $100,000           19.0%              11.1%              30.2%
Next 5%            $141,000           20.4%              10.8%              31.2%
Next 4%            $245,000           21.3%              10.2%              31.6%
Top 1%           $1,328,000           22.3%               8.4%              30.8%
ALL                 $68,900           18.0%              10.6%              28.6%

19. Data from CTJ, 2010.
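As a rough illustration of the calculation sketched in Box 2, the code below builds cumulative income and tax shares from the Table 3 rows — treating each group's income as its population share times its average income, and its taxes as that income times its effective all-taxes rate — and integrates the resulting curve with the trapezoid rule. The group weights and the interpolation are simplifying assumptions, so the result is only an approximation of the published indices.

```python
# Approximate Suits index for "All Taxes" from the grouped data in Table 3 (2009).
# Each tuple: (population share in %, average income, effective all-taxes rate).
groups = [
    (20, 12_400,    0.169),   # Lowest 20%
    (20, 25_000,    0.205),   # Second 20%
    (20, 40_000,    0.253),   # Third 20%
    (20, 66_000,    0.285),   # Fourth 20%
    (10, 100_000,   0.302),   # Next 10%
    (5,  141_000,   0.312),   # Next 5%
    (4,  245_000,   0.316),   # Next 4%
    (1,  1_328_000, 0.308),   # Top 1%
]

income = [pop * avg for pop, avg, _ in groups]        # group income (relative units)
taxes  = [pop * avg * r for pop, avg, r in groups]    # group taxes paid

total_income, total_tax = sum(income), sum(taxes)

# Cumulative shares, with groups already ordered from lowest to highest income.
cum_income, cum_tax = [0.0], [0.0]
x = y = 0.0
for inc, tax in zip(income, taxes):
    x += inc / total_income
    y += tax / total_tax
    cum_income.append(x)
    cum_tax.append(y)

# Suits index: S = 1 - L / K, where L is the area under the curve of cumulative tax
# share plotted against cumulative income share, and K = 1/2 is the area under the
# 45-degree line of a proportional tax. Trapezoid rule over the grouped points:
L = sum((cum_income[i + 1] - cum_income[i]) * (cum_tax[i + 1] + cum_tax[i]) / 2
        for i in range(len(groups)))
suits = 1 - L / 0.5

print(f"Approximate Suits index, all U.S. taxes (Table 3 data): {suits:+.2f}")
```

With these inputs the approximation comes out at roughly +0.06, close to the 2009 all-taxes figure in Table 2.1, though agreement that close should not be expected in general from grouped data.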
Tax Progressivity over Time
Consistent data are generally not available to determine how the entire tax burden in the
U.S. has shifted over time. Most analyses are limited to one, or a few, tax types. Further,
interest groups can interpret the available data to support their particular agendas. For an
illustration of how the same tax data can be used to support different claims, see Box
3.
Analysis of tax progressivity over time indicates that the federal tax system is about as
progressive now as it was in the late 1970s (Roach, 2010). The progressivity of the
federal tax system declined during the early 1980s, rose in 1987 (the year following the
passage of the Tax Reform Act of 1986), either remained stable or rose slightly up to the
mid-2000s, and decreased slightly since the mid-2000s.
Complete data on the distribution of state and local taxes are available from Citizens for
Tax Justice for 1995, 2002, 2007, and 2009, with Suits Indices of -0.11, -0.07, -0.12, and
-0.07 respectively. Thus the available data suggest no obvious overall trend in the
regressivity of state and local taxes. The unavailability of consistent data on the
distribution of state and local taxes makes trends in the overall U.S.
tax system difficult to determine. As Table 2.1 indicated, total taxes declined in
progressivity from 2001 to 2004, and then stayed about the same from 2004 to 2009.
BOX 3. INTERPRETING TAX PROGRESSIVITY DATA
Has the federal income tax burden on the very wealthy been increasing or decreasing in
recent decades? Data published by the CBO reveals that the percent of federal income
taxes paid by the highest-income taxpayers has increased steadily over the past few
decades. In 1979, the top 1% of taxpayers paid about 18.3% of all federal income taxes.
In 2007, the top 1% of taxpayers paid over 39.5%. Clearly, these data suggest that the
federal income tax has become much more progressive since 1979.
However, these statistics represent an incomplete analysis. Specifically, they fail to
consider how the proportion of income accruing to the top 1% has changed over the same
time period. The increasing tax share paid by high-income taxpayers may be a function of
an increase in income, rather than a change in the tax system. In other words, if the share
of all income received by the top 1% increased, we would naturally expect that their share
of taxes paid would also increase without any changes in the underlying progressivity of
the tax system. Income statistics indicate that the share of income going to the top 1% of
taxpayers has also increased significantly since 1979. The top 1% of taxpayers received
less than 9.2% of income in 1979 but more than 19.4% in 2007. Based on this fact alone,
we would expect the top 1% to be paying a greater share of all federal income taxes.
So, has the federal income tax burden on the top 1% increased or decreased since 1979?
We can combine the tax and income data for a more complete analysis. The share of
income going to the top 1% increased by a factor of 2.1 between 1979 and 2007.
Meanwhile, their share of taxes paid has increased by a factor of 2.2. This suggests that
the share of taxes paid by the top 1% has risen by about as much as their share of
income – indicating a relatively stable degree of tax progressivity in the federal income
tax – a dramatically different conclusion had we only considered data on tax shares!
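The ratios quoted in Box 3 can be verified in a few lines from the shares given above; the only inputs are the 1979 and 2007 income and tax shares of the top 1%.

```python
# Quick check of the Box 3 growth factors for the top 1% (1979 vs. 2007).
income_share_1979, income_share_2007 = 0.092, 0.194
tax_share_1979, tax_share_2007 = 0.183, 0.395

print(f"Income share grew by a factor of {income_share_2007 / income_share_1979:.1f}")
print(f"Tax share grew by a factor of {tax_share_2007 / tax_share_1979:.1f}")
```

The two factors (about 2.1 and 2.2) are nearly equal, which is the basis for the box's conclusion that federal income tax progressivity for this group has been roughly stable.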
References
Brownlee, W. Elliot. 1996. Federal Taxation in America. University of Cambridge Press:
Cambridge.
Chaptman, Dennis. 2003 “States' Budget Troubles Worsening, Report Finds,” Milwaukee
Journal Sentinel, Feb. 5, 2003.
Citizens for Tax Justice, Institute on Taxation & Economic Policy. 2003a. “Who Pays? A
Distributional Analysis of the Tax Systems in All 50 States, 2nd Edition,” January 2003,
http://www.itepnet.org/wp2000/text.pdf.
Citizens for Tax Justice. 2010. “All Americans Pay Taxes,” April 15, 2010.
http://www.ctj.org/pdf/taxday2010.pdf.
Citizens for Tax Justice. 2003b. “Final Tax Plan Tilts Even More Towards Richest,”
June 5, 2003 press release, http://www.ctj.org/pdf/sen0522.pdf.
Citizens for Tax Justice. 2002. “White House Reveals Nation’s Biggest Problems: The
Very Rich Don’t Have Enough Money & Workers Don’t Pay Enough in Taxes,”
December 16, 2002 press release, http://www.ctj.org/pdf/flat1202.pdf.
Citizens for Tax Justice. 2001. “Final Version of Bush Tax Plan Keeps High-End Tax
Cuts, Adds to Long-Term Cost,” May 26, 2001 press release,
http://www.ctj.org/html/gwbfinal.htm.
Congressional Budget Office, “Effective Federal Tax Rates, 2005,” December 2008.
Fullerton, Don, and Gilbert E. Metcalf, 2002. “Tax Incidence,” National Bureau of
Economic Research Working Paper 8829.
IRS (Internal Revenue Service). Various Years. Statistics of Income, Individual Income
Tax Returns. Washington, D.C.
IRS (Internal Revenue Service). 2002. “Personal Exemptions and Individual Income Tax
Rates, 1913-2002.” Statistics of Income Bulletin Data Release, June 2002.
Johnson, Charles M. 2002. “Finding their Balance?” Missoulian, December 8, 2002.

Joint Committee on Taxation. 2001. “Updated Distribution of Certain Federal Tax Liabilities by Income Class for Calendar Year 2001,” JCX-65-01.

Krugman, Paul. 2002. “For Richer,” The New York Times, October 20, 2002, section 6, page 62.
Krugman, Paul. 2001. Fuzzy Math: The Essential Guide to the Bush Tax Cut Plan, W.W.
Norton & Company: New York.
Martin, Cathie J. 1991. Shifting the Burden: The Struggle over Growth and Corporate
Taxation. The University of Chicago Press: Chicago.
Metcalf, Gilbert E. and Don Fullerton. 2002. “The Distribution of Tax Burdens: An
Introduction,” National Bureau of Economic Research Working Paper 8978.
OECD (Organisation for Economic Co-operation and Development). 2010. “More
Information on Environmentally Related Taxes, Fees and Charges,”
http://www2.oecd.org/ecoinst/queries/index.htm.
OMB (Office of Management and Budget). 2003. “Historical Tables, Budget of the
United States Government, Fiscal Year 2004.” Washington, D.C.
Peterson, Wallace C. 1999. The Social Security Primer: What Every Citizen Should
Know. M.E. Sharpe: Armonk, NY.
Petska, Tom, and Mike Strudler. 1999. “The Distribution of Individual Income and
Taxes: A New Look at an Old Issue.” Paper presented at the 1999 American Economics
Association conference, January 3-5, 1999, New York,
http://www.irs.gov/taxstats/article/0,,id=112309,00.html.
Roach, Brian. 2010. “Progressive and Regressive Taxation in the United States: Who’s
Really Paying (and Not Paying) their Fair Share?” Global Development And
Environment working paper 10-07, December 2010.
Roach, Brian. 2003. “Progressive and Regressive Taxation in the United States: Who’s
Really Paying (and Not Paying) their Fair Share?” Global Development And
Environment working paper 03-10, October 2003.
Skidmore, Max J. 1999. Social Security and Its Enemies. Westview Press: Boulder, CO.
Tax Policy Center. 2010. “Wealth Transfer Taxes: Who Pays the Estate Tax?” The Tax
Policy Briefing Book, http://www.taxpolicycenter.org/briefing-book/keyelements/estate/who.cfm.
Tax Policy Center. 2008. “Estate Tax Returns and Liability Under Current Law and
Various Reform Proposals, 2008-2018,” Table T08-0264, October 20, 2008.
Tempalski, Jerry. 1998. “Revenue Effects of Major Tax Bills.” Office of Tax Analysis
Working Paper 81, December 1998.
U.S. Census Bureau. 2003. “Historical Income Tables - Income Equality, Table IE-1,”
http://www.census.gov/hhes/income/histinc/ie1.html.
U.S. Census Bureau. 2010. The 2010 Statistical Abstract of the United States.
Washington, D.C.
U.S. Census Bureau. Various Years. Statistical Abstract of the United States.
Washington, D.C.
U.S. Census Bureau. 1960. Historical Statistics of the United States, Colonial Times to
1957. Washington, D.C.
MODULE SUMMARY
• The overall tax system in the United States is progressive, meaning that effective
tax rates tend to increase as income increases. Progressive taxation is based on
the view that higher-income taxpayers can pay higher tax rates without having to
forego life’s basic necessities. Progressive taxation can also redress economic
inequalities and collect a given level of revenue while maintaining the maximum
level of economic growth.
• The federal income tax is the most complicated and debated tax in the U.S. tax
system. The federal income tax is progressive, with increasing marginal tax rates.
Federal income taxes are calculated based on taxable income, which is less than
total income because various exemptions and deductions are allowed.
• The federal tax system in the U.S. also includes social insurance, corporate,
excise, estate, and gifts taxes. Social insurance and excise taxes are regressive
while corporate, estate, and gift taxes are progressive. The U.S. tax system also
includes state and local taxes, primarily sales, income, and property taxes.
• Nearly 70% of the taxes levied in the U.S. are collected at the federal level. The
largest federal tax is the income tax, closely followed by social insurance taxes.
The most significant non-federal tax is property taxes, followed by sales and
income taxes.
• Up until the early 1900s, the U.S. tax system primarily relied on excise taxes and
tariffs for public revenues. The 16th Amendment, ratified in 1913, created the
legal basis for federal income taxation, which up to that point had been prohibited
under the Constitution.
• Both World Wars led to significant changes in the structure and overall magnitude
of taxes in the U.S. By the end of World War II, U.S. taxes were broad-based but
progressive and dominated by federal-level taxation.
• Tax cuts passed during the Reagan Administration in the 1980s were based on the
theory that lower tax rates would spur economic growth, leading to a net increase
in tax revenues. This theory was not supported by the evidence, eventually
leading to tax increases in the early 1990s. The Bush tax cuts passed in 2001 and
2003 have made federal taxes less progressive.
• Tax revenues in the U.S. increased dramatically during the 20th century, even after
adjusting for inflation. When measured as a percentage of GDP, tax revenues
grew significantly during World War II, grew at a slower pace afterwards, and
leveled off recently at around 30% of GDP.
• Measuring the distribution of taxes requires tax incidence analysis, which
determines the ultimate burden of a tax on taxpayers. Tax incidence analysis
generally concludes that social insurance taxes fall on workers, corporate taxes
fall on the owners of capital, excise taxes fall on consumers, and property taxes fall mainly on property owners (though some of the burden can be passed on to renters).
• Effective tax rates measured by income level can be used to determine whether a
particular tax is progressive or regressive. While the U.S. tax system contains
both progressive and regressive taxes, the overall system is progressive. Recent
data suggest that federal taxes are becoming less progressive while state and local
taxes are becoming more regressive.
DISCUSSION QUESTIONS
1. Comment on the following statement: “The fairest type of tax system is one in
which everyone pays the same rate of taxation, regardless of income.” Do you
agree or disagree with the statement? Why?
2. Suppose you could set the overall effective tax rates across different levels of
income. What do you think should be the appropriate effective tax rates for a
household of four (two adults and two children) with an income of $25,000? An
income of $60,000? An income of $100,000? An income of $500,000? Is the
system you devise more or less progressive than the tax system currently in place
in the U.S.? How does your system compare with others in your class?
3. The U.S. tax system is currently comprised of many different types of taxes
(income, social insurance, corporate, sales, property, etc.). What reasons could be
given to support the use of many different tax types in a nation? Do you think
that a nation’s tax system should be comprised of many different types of taxes or
just one type of tax? If you had to choose just one type of tax to levy in a nation,
what type of tax would you choose? Why?
4. Comment on the following statement: “As long as a tax cut reduces taxes for
everyone, then everyone will be better off as a result of the tax cut.” Do you
agree with this statement? Why or why not?
5. Using the Internet or other sources, look up information about basic structure of
the tax system in place in a country other than the United States. What
differences are evident in that country’s tax system? Do you think that country
has a more or less progressive tax system? Which nation’s tax system is
preferable to you? Why?
6. Locate a recent news story about a proposal for a change to the tax system, either
at the federal or state level. Summarize the proposed change. Would the change
increase or decrease tax progressivity? Who would benefit most from the
proposal? Who would be hurt the most from the proposal? Do you support the
proposal? Why or why not?
ADDITIONAL RESOURCES
• All the federal government agencies that work on tax issues maintain web sites that
provide tax data and reports. The IRS’s Statistics of Income Bulletins, published four
times a year, can be found dating back to 1998 at
http://www.irs.gov/taxstats/article/0,,id=117514,00.html. The SOI Bulletins provide
data analysis of primarily individual and corporate taxes. Publications produced by
the Joint Committee on Taxation can be found at
http://www.jct.gov/publications.html. Publications by the Congressional Budget
Office related to tax issues, going as far back as the 1970s, are available at
http://www.cbo.gov/publications/bysubject.cfm?cat=33. Finally, tax analysis by the U.S.
Treasury Department, only dating back to 2001, can be found at
http://www.treasury.gov/resource-center/tax-policy/Pages/default.aspx.
• A large amount of tax-related data is published annually in the Statistical Abstract of
the United States. Each year’s edition includes a chapter on state and local
government finances and another chapter on federal government finances. The
Census Bureau has recently added select historical editions of the Statistical Abstract
dating as far back as 1878, although online availability is more complete for the first
half of the 20th century than the latter half of the century (see
http://www.census.gov/compendia/statab).
• Citizens for Tax Justice publishes many other tax analyses besides those referenced in
this module. Their web site is www.ctj.org. Two other non-profit organizations that
conduct tax analysis are the Tax Policy Center, a joint venture of the Urban Institute
and Brookings Institution, and the Center for Budget and Policy Priorities. The Tax
Policy Center (www.taxpolicycenter.org) publishes several reports each month on a
wide range of tax issues, including distributional impacts and public budget
implications. The CBPP (www.cbpp.org) research focuses on “fiscal policy and
public programs that affect low- and moderate-income families and individuals.”
Similar to the Tax Policy Center, the CBPP conducts distributional analyses of
current tax proposals.
• For an opposing view on tax issues, the Tax Foundation (www.taxfoundation.org)
publishes tax analyses that generally support lower overall taxes and conclude that the
distributional impacts of recent tax cuts are fair. A similar organization, with a more
activist agenda, is Americans for Tax Reform (www.atr.org).
KEY TERMS AND CONCEPTS
Ability-to-pay principle: the idea that higher-income households and individuals should
pay higher tax rates than lower-income taxpayers because they are more able to bear the
tax without foregoing life’s basic necessities.
Adjusted gross income (AGI): the total income of a household or individual minus
certain out-of-pocket expenses such as retirement account contributions, student loan
interest, tuition, and other allowable subtractions. AGI is calculated on one’s federal tax
return.
Effective tax rate: one’s total taxes paid divided by some measure of income, such as
total income, adjusted gross income, or taxable income.
Environmental taxes: taxes levied on a good or service based on the environmental
impact of its production or consumption.
Estate taxes: taxes on the transfer of large estates to beneficiaries.
Excise taxes: taxes on the production, sale, or use of a particular commodity.
Exemptions: an amount excluded from taxation based on the number of tax filers and
dependents.
Gift taxes: taxes levied on large gifts; gift taxes are designed to prevent taxpayers from
avoiding estate taxes by giving away their assets while alive.
Itemized deductions: certain expenses excluded from federal taxation, including
mortgage interest, state taxes, gifts to charity, real estate taxes, and major medical
expenses. A taxpayer is allowed to deduct either the standard or itemized deduction,
whichever is larger.
Marginal propensity to consume: the proportion of a marginal income increase that is
spent on consumption goods and services, as opposed to invested or saved.
Marginal tax rates: a tax system where a single taxpayer can pay different tax rates on
successive portions of income.
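Because a single taxpayer faces different rates on successive slices of income, a marginal schedule is easiest to see in a short calculation. The bracket thresholds and rates below are invented for illustration and do not correspond to any actual schedule in this module.

```python
# Hypothetical marginal-rate schedule (brackets invented for illustration only).
BRACKETS = [          # (upper limit of bracket, marginal rate)
    (10_000, 0.10),
    (40_000, 0.20),
    (100_000, 0.30),
    (float("inf"), 0.40),
]

def tax_owed(taxable_income: float) -> float:
    """Apply each rate only to the slice of income that falls in its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if taxable_income <= lower:
            break
        tax += (min(taxable_income, upper) - lower) * rate
        lower = upper
    return tax

income = 60_000.0
print(f"Tax on ${income:,.0f}: ${tax_owed(income):,.0f}")
print(f"Effective rate: {tax_owed(income) / income:.1%} "
      f"(below the 30% marginal rate on the last dollar)")
```

Note that the effective rate (total tax divided by income) is lower than the top marginal rate the taxpayer faces, which is why this glossary distinguishes the two concepts.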
National consumption tax: a federal-level tax paid on the dollar amount a household or
individual spends each year on goods and services, calculated using either a single tax
rate or marginal tax rates.
National sales tax: a federal-level tax paid on the purchase of certain goods and services,
calculated as a percentage of the selling price.
Perfect competition: an idealized market structure characterized by many informed
small firms with no market power selling undifferentiated products and with complete
freedom to enter or exit the market.
Progressive tax: a tax in which the percentage of income one pays for the tax increases
as one’s income increases.
Proportional tax: a tax in which the percentage of income one pays for the tax is
constant regardless of income level.
Regressive tax: a tax in which the percentage of income one pays for the tax decreases as
one’s income increases.
Social insurance taxes: taxes paid to support social insurance programs such as Social
Security, Medicare, and Medicaid.
Standard deduction: a fixed amount of income excluded from federal taxation based on
filing status (single, married, etc.). A taxpayer is allowed to deduct either the standard or
itemized deduction, whichever is larger.
Suits index: an index developed by Daniel Suits in the 1970s to measure the overall
progressivity or regressivity of a tax.
Tariffs: taxes levied on imported goods and services.
Tax incidence analysis: estimating the ultimate financial burden of various taxes on
different categories of households by tracing a tax’s impact on market prices and the
economic behavior of consumers and businesses.
Taxable income: the amount of income used as the basis for determining one's income
taxes. For federal income taxes, taxable income is equal to adjusted gross income (AGI)
minus allowable deductions and exemptions.
Total income: the total income a household or individual receives from all sources.
Value-added tax: a tax levied at each stage in the production process of a good or
service.
Wealth taxes: taxes levied on the value of one’s assets such as real estate, investments,
cash, and other personal property.
|
ONLY USE THE DATA I PROVIDE
Limit your response to 250 words
If you cannot answer using the contexts alone, say "I cannot determine the answer to that due to lack of context" | According to the context document, what states could exceed $100 billion in health benefits from implementation of of zero-emission transportation and electricity resources? | **Zeroing in on Healthy Air**
Executive Summary
Zeroing in on Healthy Air is a report by the American Lung Association illustrating the public health urgency of policies and investments for transitioning to zero-emission transportation and electricity generation in the coming decades. These sectors are leading sources of unhealthy air in the United States. Today, over four in ten Americans — more than 135 million people — live in communities impacted by unhealthy levels of air pollution. Research demonstrates that the burdens of unhealthy air include increased asthma attacks, heart attacks and strokes, lung cancer and premature death. These poor health outcomes are not shared equitably, with many communities of color and lower income communities at greater risk due to increased exposure to transportation pollution. The transportation sector is also the largest source of greenhouse gas emissions that drive climate change, which threatens clean air progress and amplifies a wide range of health risks and disparities. This report finds that a national shift to 100 percent sales of zero-emission passenger vehicles (by 2035) and medium- and heavy-duty trucks (by 2040), coupled with renewable electricity would generate over $1.2 trillion in public health benefits between 2020 and 2050. These benefits would take the form of avoiding up to 110,000 premature deaths, along with nearly 3 million asthma attacks and over 13 million workdays lost due to cleaner air. This report calculates the emission reductions possible from shifting to vehicles without tailpipes, as well as eliminating fuel combustion from the electricity generation sector so that neither those living near roads or near electricity generation would be subjected to unacceptable doses of toxic air pollution. The report also highlights the fact that the shift to zeroemission transportation and electricity generation in the United States will yield avoided global climate damages over $1.7 trillion. By expediting investments and policies at the local, state and federal levels to reduce harmful pollution, all communities stand to experience cleaner air. Policies and investments must prioritize low-income communities and communities of color that bear a disproportionate pollution burden. State and local jurisdictions should act to implement policies as soon as possible, including in advance of the benchmarks used in this report’s methodology. These actions are needed to achieve clean air, reduce health disparities and avoid even more dire consequences of climate change.
The Public Health Need for Zero Emissions Air Pollution Remains a Major Threat to Americans’ Health Despite decades of progress to clean the air, more than 4 in 10 of all Americans — 135 million — still live in a community impacted by unhealthy levels of air pollution.ii Those impacted by polluted air face increased risk of a wide range of poor health outcomes as the result of increased ozone and/or particle pollution.iii The adverse impacts of pollution from the transportation and electricity generation sectors are clear, and must be recognized as a threat to local community health, health equity and a driver of major climate change-related health risks. Even with certification to meet existing standards, it is clear that combustion technologies often generate far greater levels of pollution in the real world than on paper.
Location Matters: Disparities in Exposure Burden Exposure to pollution with its associated negative health consequences is dictated by where someone lives, attends school or works. In general, the higher the exposure, the greater the risk of harm. Many communities face disproportionate burdens due to pollution generated from production, transportation, refining and combustion of fuels along the transportation and electricity generating systems. Lower income communities and communities of color are often the most over-burdened by pollution sources today due to decades of inequitable land use decisions and systemic racism. The American Lung Association’s State of the Air 2021 report illustrated the disparities in pollution burdens across the United States, noting that a person of color in the United States is up to three times more likely to be breathing the most polluted air than white people.v All sources of harmful air and climate pollution must shift rapidly away from combustion and toward zero-emission technologies to ensure all Americans have access to the benefits of less polluting technologies.
Estimated Benefits of Zero-Emission Transportation and Electricity Generation The combustion of fuels in the electricity generation and transportation sectors is a major contributor to the health and climate burdens facing all Americans. These sources of pollution also create significant disparities in pollution burdens and poor health, especially in lower-income communities and communities of color. The transition to non-combustion technologies is underway and must continue to accelerate to protect the health of communities today and across the coming decades. Key findings are presented below: Pollution Reduction Benefits from Zero-Emission Transportation Accelerating the shift to zero-emission transportation and non-combustion electricity generation will generate major reductions in harmful pollutants. Key pollutants included in this research are described below along with projected onroad pollution reductions with the shift to zero-emission technologies when compared with a modeled “Business As Usual” case for the on-road fleet.
Benefits of Moving All Vehicle Classes to Zero-Emissions All vehicles must move to zero-emission technologies to ensure the most robust public health benefits occur. The 2020 passenger vehicle fleet represents approximately 94 percent of the nation’s on-road vehicle fleet and generates over 1 million tons of ozone- and particle-forming NOx emissions, and over 33,400 tons of fine particles annually. Heavy-duty vehicles represent approximately six percent of the on-road fleet in 2020, but generate 59 percent of ozone- and particle-forming NOx emissions and 55 percent of the particle pollution (including brake and tire particles). Differentiating the relative impacts of fleet segments is particularly important when considering the concentrations of heavy-duty vehicles in environmental justice areas near highways, ports, railyards and warehouse settings. For greenhouse gases (GHG), the 2020 light duty vehicle fleet generates approximately 69 percent of GHG emissions, while the heavy-duty fleet produces 31 percent. The table below illustrates the relative emission reduction benefits of on-road transportation electrification for each the light-duty fleet and the medium- and heavy-duty segments compared with the “Business-As-Usual” case. It is important to note that these on-road reductions could yield major benefits within each class, with light-duty vehicles reducing nearly twice the GHGs as heavy-duty, while heavy-duty engines could yield approximately eight times the smog- and particle-forming NOx emissions when compared with the light-duty fleet. Ultimately, all segments produce harmful pollutants and must move quickly to zero-emissions to protect health and reduce climate pollution.
National Results: Public Health and Climate Benefits The shift to zero-emission transportation and non-combustion electricity generation could yield major health benefits throughout the nation in the coming decades. Cumulatively, the national benefits of transitioning away from combustion in the transportation sector toward 100 percent zero-emission sales and a non-combustion electricity generation sector could generate over $1.2 trillion in health benefits across the United States between 2020 and 2050. These benefits include approximately 110,000 lives saved, over 2.7 million asthma attacks avoided (among those aged 6-18 years), 13.4 million lost works days and a wider range of other negative health impacts avoided due to cleaner air.1,2 In addition to these health benefits, this analysis found that over $1.7 trillion in global climate benefits could be achieved with a reduction of over 24 billion metric tons of GHGs by mid-century.
Near-Term Health Benefits While the benefits noted above are cumulative between 2020 and 2050, this analysis also finds that annual health benefits could reach into the tens of billions by the end of this decade – nearly $28 billion in 2030 alone. Health benefits increase significantly as deployments of zero-emission technologies in the transportation and electricity generating sectors expand.
State Results: Public Health Benefits Across the United States Every state in the U.S. stands to experience significant public health benefits from the widespread implementation of zero-emission transportation and electricity resources over the coming decades. As shown below, more than half of the states could experience more than $10 billion in cumulative public health benefits. Two states (California and Texas) could exceed $100 billion in health benefits, and six more states (Pennsylvania, Florida, Ohio, New York, Illinois, and Michigan) could see benefits exceeding $50 billion by 2050. These benefits cover a wide range of avoided health impacts, three of which (premature deaths, asthma attacks, lost workdays) are shown in the table below.
Local Results: Public Health Benefits Across America Communities across the United States stand to benefit from the widespread transition to zero-emission transportation and electricity generation. As transportation emissions are a dominant source of local exposures in many communities, a carefully and equitably designed shift to non-combustion transportation can mean cleaner air for all, and especially those most burdened by pollution from these sources today. Similarly, a shift away from fossil-fueled electricity generation is critical to improving the health of those most impacted by emissions from power plants, including in lower-income, rural communities across the United States. This analysis found that the 100 U.S. counties (roughly 3 percent of all counties assessed) with the highest percent populations of People of Color could experience approximately 13 percent of the cumulative health benefits of this transition ($155 billion, between 2020-2050). Expanding this further, the 500 U.S. Counties (16 percent of counties assessed) with the highest percent populations of People of Color could experience 40 percent of the benefits, or $487 billion cumulatively between 2020 and 2050. It is also clear that the presence of benefits within these counties does not directly translate to benefits to individual neighborhoods or residents, however. This is an indicator of the urgent need to center equity in policies and investments to ensure access to the benefits of pollution-free mobility and power. Additional analysis of the benefits in rural communities, lower-income communities, and neighborhood exposure levels could provide deeper insights into more equitable policy and investment designs. At a broader scale, this analysis shows a leveling of benefits across the country as the locations of power plants and transportation hubs are often impacting communities with varying socioeconomic characteristics. As shown in the table on the next page, communities across the United States could experience billions in public health benefits, and significantly reduce premature deaths, asthma attacks and other negative health consequences of polluted air through 2050. The table includes the 25 Metropolitan Areas across the United States showing the largest cumulative health benefits by 2050 considering the shift to non-combustion electricity generation and zero emission transportation. | {Text Passage}
**Zeroing in on Healthy Air**
Executive Summary
Zeroing in on Healthy Air is a report by the American Lung Association illustrating the public health urgency of policies and investments for transitioning to zero-emission transportation and electricity generation in the coming decades. These sectors are leading sources of unhealthy air in the United States. Today, over four in ten Americans — more than 135 million people — live in communities impacted by unhealthy levels of air pollution. Research demonstrates that the burdens of unhealthy air include increased asthma attacks, heart attacks and strokes, lung cancer and premature death. These poor health outcomes are not shared equitably, with many communities of color and lower income communities at greater risk due to increased exposure to transportation pollution. The transportation sector is also the largest source of greenhouse gas emissions that drive climate change, which threatens clean air progress and amplifies a wide range of health risks and disparities. This report finds that a national shift to 100 percent sales of zero-emission passenger vehicles (by 2035) and medium- and heavy-duty trucks (by 2040), coupled with renewable electricity would generate over $1.2 trillion in public health benefits between 2020 and 2050. These benefits would take the form of avoiding up to 110,000 premature deaths, along with nearly 3 million asthma attacks and over 13 million workdays lost due to cleaner air. This report calculates the emission reductions possible from shifting to vehicles without tailpipes, as well as eliminating fuel combustion from the electricity generation sector so that neither those living near roads or near electricity generation would be subjected to unacceptable doses of toxic air pollution. The report also highlights the fact that the shift to zeroemission transportation and electricity generation in the United States will yield avoided global climate damages over $1.7 trillion. By expediting investments and policies at the local, state and federal levels to reduce harmful pollution, all communities stand to experience cleaner air. Policies and investments must prioritize low-income communities and communities of color that bear a disproportionate pollution burden. State and local jurisdictions should act to implement policies as soon as possible, including in advance of the benchmarks used in this report’s methodology. These actions are needed to achieve clean air, reduce health disparities and avoid even more dire consequences of climate change.
The Public Health Need for Zero Emissions Air Pollution Remains a Major Threat to Americans’ Health Despite decades of progress to clean the air, more than 4 in 10 of all Americans — 135 million — still live in a community impacted by unhealthy levels of air pollution.ii Those impacted by polluted air face increased risk of a wide range of poor health outcomes as the result of increased ozone and/or particle pollution.iii The adverse impacts of pollution from the transportation and electricity generation sectors are clear, and must be recognized as a threat to local community health, health equity and a driver of major climate change-related health risks. Even with certification to meet existing standards, it is clear that combustion technologies often generate far greater levels of pollution in the real world than on paper.
Location Matters: Disparities in Exposure Burden Exposure to pollution with its associated negative health consequences is dictated by where someone lives, attends school or works. In general, the higher the exposure, the greater the risk of harm. Many communities face disproportionate burdens due to pollution generated from production, transportation, refining and combustion of fuels along the transportation and electricity generating systems. Lower income communities and communities of color are often the most over-burdened by pollution sources today due to decades of inequitable land use decisions and systemic racism. The American Lung Association’s State of the Air 2021 report illustrated the disparities in pollution burdens across the United States, noting that a person of color in the United States is up to three times more likely to be breathing the most polluted air than white people.v All sources of harmful air and climate pollution must shift rapidly away from combustion and toward zero-emission technologies to ensure all Americans have access to the benefits of less polluting technologies.
Estimated Benefits of Zero-Emission Transportation and Electricity Generation The combustion of fuels in the electricity generation and transportation sectors is a major contributor to the health and climate burdens facing all Americans. These sources of pollution also create significant disparities in pollution burdens and poor health, especially in lower-income communities and communities of color. The transition to non-combustion technologies is underway and must continue to accelerate to protect the health of communities today and across the coming decades. Key findings are presented below: Pollution Reduction Benefits from Zero-Emission Transportation Accelerating the shift to zero-emission transportation and non-combustion electricity generation will generate major reductions in harmful pollutants. Key pollutants included in this research are described below along with projected onroad pollution reductions with the shift to zero-emission technologies when compared with a modeled “Business As Usual” case for the on-road fleet.
Benefits of Moving All Vehicle Classes to Zero-Emissions All vehicles must move to zero-emission technologies to ensure the most robust public health benefits occur. The 2020 passenger vehicle fleet represents approximately 94 percent of the nation’s on-road vehicle fleet and generates over 1 million tons of ozone- and particle-forming NOx emissions, and over 33,400 tons of fine particles annually. Heavy-duty vehicles represent approximately six percent of the on-road fleet in 2020, but generate 59 percent of ozone- and particle-forming NOx emissions and 55 percent of the particle pollution (including brake and tire particles). Differentiating the relative impacts of fleet segments is particularly important when considering the concentrations of heavy-duty vehicles in environmental justice areas near highways, ports, railyards and warehouse settings. For greenhouse gases (GHG), the 2020 light duty vehicle fleet generates approximately 69 percent of GHG emissions, while the heavy-duty fleet produces 31 percent. The table below illustrates the relative emission reduction benefits of on-road transportation electrification for each the light-duty fleet and the medium- and heavy-duty segments compared with the “Business-As-Usual” case. It is important to note that these on-road reductions could yield major benefits within each class, with light-duty vehicles reducing nearly twice the GHGs as heavy-duty, while heavy-duty engines could yield approximately eight times the smog- and particle-forming NOx emissions when compared with the light-duty fleet. Ultimately, all segments produce harmful pollutants and must move quickly to zero-emissions to protect health and reduce climate pollution.
National Results: Public Health and Climate Benefits The shift to zero-emission transportation and non-combustion electricity generation could yield major health benefits throughout the nation in the coming decades. Cumulatively, the national benefits of transitioning away from combustion in the transportation sector toward 100 percent zero-emission sales and a non-combustion electricity generation sector could generate over $1.2 trillion in health benefits across the United States between 2020 and 2050. These benefits include approximately 110,000 lives saved, over 2.7 million asthma attacks avoided (among those aged 6-18 years), 13.4 million lost works days and a wider range of other negative health impacts avoided due to cleaner air.1,2 In addition to these health benefits, this analysis found that over $1.7 trillion in global climate benefits could be achieved with a reduction of over 24 billion metric tons of GHGs by mid-century.
Near-Term Health Benefits While the benefits noted above are cumulative between 2020 and 2050, this analysis also finds that annual health benefits could reach into the tens of billions by the end of this decade – nearly $28 billion in 2030 alone. Health benefits increase significantly as deployments of zero-emission technologies in the transportation and electricity generating sectors expand.
State Results: Public Health Benefits Across the United States Every state in the U.S. stands to experience significant public health benefits from the widespread implementation of zero-emission transportation and electricity resources over the coming decades. As shown below, more than half of the states could experience more than $10 billion in cumulative public health benefits. Two states (California and Texas) could exceed $100 billion in health benefits, and six more states (Pennsylvania, Florida, Ohio, New York, Illinois, and Michigan) could see benefits exceeding $50 billion by 2050. These benefits cover a wide range of avoided health impacts, three of which (premature deaths, asthma attacks, lost workdays) are shown in the table below.
Local Results: Public Health Benefits Across America Communities across the United States stand to benefit from the widespread transition to zero-emission transportation and electricity generation. As transportation emissions are a dominant source of local exposures in many communities, a carefully and equitably designed shift to non-combustion transportation can mean cleaner air for all, and especially those most burdened by pollution from these sources today. Similarly, a shift away from fossil-fueled electricity generation is critical to improving the health of those most impacted by emissions from power plants, including in lower-income, rural communities across the United States. This analysis found that the 100 U.S. counties (roughly 3 percent of all counties assessed) with the highest percent populations of People of Color could experience approximately 13 percent of the cumulative health benefits of this transition ($155 billion, between 2020-2050). Expanding this further, the 500 U.S. Counties (16 percent of counties assessed) with the highest percent populations of People of Color could experience 40 percent of the benefits, or $487 billion cumulatively between 2020 and 2050. It is also clear that the presence of benefits within these counties does not directly translate to benefits to individual neighborhoods or residents, however. This is an indicator of the urgent need to center equity in policies and investments to ensure access to the benefits of pollution-free mobility and power. Additional analysis of the benefits in rural communities, lower-income communities, and neighborhood exposure levels could provide deeper insights into more equitable policy and investment designs. At a broader scale, this analysis shows a leveling of benefits across the country as the locations of power plants and transportation hubs are often impacting communities with varying socioeconomic characteristics. As shown in the table on the next page, communities across the United States could experience billions in public health benefits, and significantly reduce premature deaths, asthma attacks and other negative health consequences of polluted air through 2050. The table includes the 25 Metropolitan Areas across the United States showing the largest cumulative health benefits by 2050 considering the shift to non-combustion electricity generation and zero emission transportation.
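As a quick arithmetic check on the county-level shares quoted above, the dollar figures are consistent with the $1.2 trillion national total; the short Python sketch below uses only the numbers given in this passage, and the rounding is approximate.
# Consistency check using only figures quoted in the passage above (illustrative).
national_total_billion = 1200      # ~$1.2 trillion in cumulative health benefits, 2020-2050
top_100_counties_billion = 155     # 100 counties with the highest percent populations of People of Color
top_500_counties_billion = 487     # 500 such counties
print(round(100 * top_100_counties_billion / national_total_billion))  # -> 13 (percent, as stated)
print(round(100 * top_500_counties_billion / national_total_billion))  # -> 41 (percent; stated as roughly 40)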
----------------
{Task Instructions}
ONLY USE THE DATA I PROVIDE
Limit your response to 250 words
If you cannot answer using the contexts alone, say "I cannot determine the answer to that due to lack of context"
----------------
{Query}
According to the context document, what states could exceed $100 billion in health benefits from implementation of zero-emission transportation and electricity resources? |
Provide your response solely on the information provided in the text of the prompt. Do not use any outside information, resources or prior knowledge. Make your response exactly 300 words. | What are the key factors in competition between video streaming services? | Video streaming services that use a subscription- or transaction-based system can compete by
offering content at lower prices than their competitors. Streaming services that offer live TV can
be cheaper than packages offered by MVPDs, depending on the channels the customer subscribes
to. Some MVPDs have responded by offering cheaper plans with fewer channels and by
improving their set-top boxes to offer some streaming services, such as Netflix.7 Some streaming
services that offer live TV advertise their services by promising no hidden fees, such as
equipment rentals and cancellation fees, and no annual contracts.8
Most streaming services that offer live TV do not require an annual subscription; subscribers can
sign up on a month-to-month basis instead. While the lack of a long-term commitment can be
appealing to some consumers, it also means that prices can suddenly increase.9 For example, on
June 30, 2020, YouTube TV added eight new channels and increased its price from $50 to $65 per
month, effective immediately for new subscribers; for its current subscribers, the changes went
into effect on July 30, 2020.10
On average, the price for streaming services that offer live TV is higher than those that offer only
video-on-demand (Table 2). This may be partially due to cost differences—it tends to be costly to
license the rights to air a television network.11 To attract more users, a streaming service that
offers live TV may try to expand the number of networks offered on its service, but this in turn
increases the cost of running the service. In contrast, a streaming service that offers video-on-demand licenses at least some movies and shows that have been previously shown elsewhere,
which tends to lower the cost of licensing this content.
Differences in prices across streaming services that offer only video-on-demand tend to be fairly
small. This may be partially due to their relatively low prices, which make it difficult to lower
prices further. Thus, in addition to competing with prices, streaming services may seek to offer
exclusive access to popular movies and TV shows to attract new subscribers.
When the first streaming services launched in the late 2000s, they offered movies and shows that
had been previously shown elsewhere. For example, when Netflix launched its streaming service
in 2007, it offered about 1,000 television shows and movies, licensed from NBC Universal, Sony
Pictures, MGM, and others; it did not offer original content.12 A few years later, some streaming
services started commissioning movies and shows from television or film studios. This made
streaming services less dependent on licensing agreements with television networks and allowed
them to offer original programming, which increased the importance of content.13 In 2013,
Netflix debuted its first original series, House of Cards, and became the first streaming service to
win a Television Academy Emmy Award.14 Original programs from other streaming services have
won television awards as well, such as Hulu’s Handmaid’s Tale.15 In 2020, Netflix received 160
Emmy nominations, breaking the record for the greatest number of nominations of any network,
studio, or streaming platform.16 Nevertheless, streaming services continue to license previously broadcast movies and shows from television networks and film studios to complement their
original content.
Some streaming services, particularly those that offer live TV, advertise themselves as an
alternative to MVPDs. However, streaming services oftentimes rely on the same content creators
as television networks, such as sports leagues and television and movie studios.17 Television
networks and movie theaters show a single program at a time, which can create incentives to
select the program with the greatest profit potential for each time slot. In contrast, streaming
services offer multiple programs for users to choose from. Thus, streaming services can feature
content that appeals to various groups of users rather than to the public at large. This may
increase competition for video content and provide new opportunities for content creators.18
Some companies that own studios and television networks offer their own video streaming
services. This can create incentives for these studios to license fewer shows and movies to other
streaming services, reserving popular content for their own streaming services instead. For
example, AT&T, which owns Warner Brothers Studio, stopped licensing certain shows—such as
Friends, The Wire, and The Sopranos—to streaming services owned by other companies, offering
these shows exclusively on its streaming services HBO and HBO Max.19 Similarly, Comcast is
offering some of its NBCUniversal shows, such as The Office and Parks and Recreation,
exclusively on its streaming service Peacock;20 Walt Disney Co. announced that The Simpsons
would be offered exclusively on its streaming service Disney+.21
Streaming services operated by companies that also own film studios and television networks
may have an advantage over their competitors. A company may provide its streaming service
exclusive access to its studio’s programming, or may choose to license the programming to its
streaming competitors for a fee. This means some streaming services are able to restrict access to
content, which could make it more difficult for new competitors to enter the video streaming
market. Entrants may need to devote significant resources to produce or license content before
offering their streaming services to customers. | System instructions: Provide your response solely on the information provided in the text of the prompt. Do not use any outside information, resources or prior knowledge. Make your response exactly 300 words.
Question: What are the key factors in competition between video streaming services?
Context Block: Video streaming services that use a subscription- or transaction-based system can compete by
offering content at lower prices than their competitors. Streaming services that offer live TV can
be cheaper than packages offered by MVPDs, depending on the channels the customer subscribes
to. Some MVPDs have responded by offering cheaper plans with fewer channels and by
improving their set-top boxes to offer some streaming services, such as Netflix.7 Some streaming
services that offer live TV advertise their services by promising no hidden fees, such as
equipment rentals and cancellation fees, and no annual contracts.8
Most streaming services that offer live TV do not require an annual subscription; subscribers can
sign up on a month-to-month basis instead. While the lack of a long-term commitment can be
appealing to some consumers, it also means that prices can suddenly increase.9 For example, on
June 30, 2020, YouTube TV added eight new channels and increased its price from $50 to $65 per
month, effective immediately for new subscribers; for its current subscribers, the changes went
into effect on July 30, 2020.10
On average, the price for streaming services that offer live TV is higher than those that offer only
video-on-demand (Table 2). This may be partially due to cost differences—it tends to be costly to
license the rights to air a television network.11 To attract more users, a streaming service that
offers live TV may try to expand the number of networks offered on its service, but this in turn
increases the cost of running the service. In contrast, a streaming service that offers video-on-demand licenses at least some movies and shows that have been previously shown elsewhere,
which tends to lower the cost of licensing this content.
Differences in prices across streaming services that offer only video-on-demand tend to be fairly
small. This may be partially due to their relatively low prices, which make it difficult to lower
prices further. Thus, in addition to competing with prices, streaming services may seek to offer
exclusive access to popular movies and TV shows to attract new subscribers.
When the first streaming services launched in the late 2000s, they offered movies and shows that
had been previously shown elsewhere. For example, when Netflix launched its streaming service
in 2007, it offered about 1,000 television shows and movies, licensed from NBC Universal, Sony
Pictures, MGM, and others; it did not offer original content.12 A few years later, some streaming
services started commissioning movies and shows from television or film studios. This made
streaming services less dependent on licensing agreements with television networks and allowed
them to offer original programming, which increased the importance of content.13 In 2013,
Netflix debuted its first original series, House of Cards, and became the first streaming service to
win a Television Academy Emmy Award.14 Original programs from other streaming services have
won television awards as well, such as Hulu’s Handmaid’s Tale.15 In 2020, Netflix received 160
Emmy nominations, breaking the record for the greatest number of nominations of any network,
studio, or streaming platform.16 Nevertheless, streaming services continue to license previously broadcast movies and shows from television networks and film studios to complement their
original content.
Some streaming services, particularly those that offer live TV, advertise themselves as an
alternative to MVPDs. However, streaming services oftentimes rely on the same content creators
as television networks, such as sports leagues and television and movie studios.17 Television
networks and movie theaters show a single program at a time, which can create incentives to
select the program with the greatest profit potential for each time slot. In contrast, streaming
services offer multiple programs for users to choose from. Thus, streaming services can feature
content that appeals to various groups of users rather than to the public at large. This may
increase competition for video content and provide new opportunities for content creators.18
Some companies that own studios and television networks offer their own video streaming
services. This can create incentives for these studios to license fewer shows and movies to other
streaming services, reserving popular content for their own streaming services instead. For
example, AT&T, which owns Warner Brothers Studio, stopped licensing certain shows—such as
Friends, The Wire, and The Sopranos—to streaming services owned by other companies, offering
these shows exclusively on its streaming services HBO and HBO Max.19 Similarly, Comcast is
offering some of its NBCUniversal shows, such as The Office and Parks and Recreation,
exclusively on its streaming service Peacock;20 Walt Disney Co. announced that The Simpsons
would be offered exclusively on its streaming service Disney+.21
Streaming services operated by companies that also own film studios and television networks
may have an advantage over their competitors. A company may provide its streaming service
exclusive access to its studio’s programming, or may choose to license the programming to its
streaming competitors for a fee. This means some streaming services are able to restrict access to
content, which could make it more difficult for new competitors to enter the video streaming
market. Entrants may need to devote significant resources to produce or license content before
offering their streaming services to customers. |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | What is the purpose of the drug Metoprolol and what are some of the potential side effects of its usage? Make your response at least 150 words. | Metoprolol is used alone or in combination with other medications to treat high blood pressure. It also is used to treat chronic (long-term) angina (chest pain). Metoprolol is also used to improve survival after a heart attack. Metoprolol also is used in combination with other medications to treat heart failure. Metoprolol is in a class of medications called beta blockers. It works by relaxing blood vessels and slowing heart rate to improve blood flow and decrease blood pressure.
High blood pressure is a common condition and when not treated, can cause damage to the brain, heart, blood vessels, kidneys and other parts of the body. Damage to these organs may cause heart disease, a heart attack, heart failure, stroke, kidney failure, loss of vision, and other problems. In addition to taking medication, making lifestyle changes will also help to control your blood pressure. These changes include eating a diet that is low in fat and salt, maintaining a healthy weight, exercising at least 30 minutes most days, not smoking, and using alcohol in moderation.
How should this medicine be used?
Metoprolol comes as a tablet, an extended-release (long-acting) tablet, and an extended-release capsule to take by mouth. The regular tablet is usually taken once or twice a day with meals or immediately after meals. The extended-release tablet and extended-release capsule are usually taken once a day. To help you remember to take metoprolol, take it around the same time(s) every day. Follow the directions on your prescription label carefully, and ask your doctor or pharmacist to explain any part you do not understand. Take metoprolol exactly as directed. Do not take more or less of it or take it more often than prescribed by your doctor.
The extended-release tablet may be split. Swallow the whole or half extended-release tablets whole; do not chew or crush them.
Swallow the extended-release capsules whole; do not split, chew, or crush them. If you are unable to swallow the capsules, you may open the capsule and sprinkle the contents over a spoonful of soft food, such as applesauce, pudding, or yogurt and swallow the mixture immediately. Do not swallow the mixture more than 60 minutes after you sprinkle the contents of the capsule.
Your doctor may start you on a low dose of metoprolol and gradually increase your dose.
Metoprolol helps to control your condition but will not cure it. Continue to take metoprolol even if you feel well. Do not stop taking metoprolol without talking to your doctor. If you suddenly stop taking metoprolol you may experience serious heart problems such as severe chest pain, a heart attack, or an irregular heartbeat. Your doctor will probably want to decrease your dose gradually over 1 to 2 weeks and will monitor you closely.
Other uses for this medicine
Metoprolol is also used sometimes to treat certain types of irregular heartbeats. Talk to your doctor about the possible risks of using this medication for your condition.
This medication may be prescribed for other uses; ask your doctor or pharmacist for more information.
What special precautions should I follow?
Before taking metoprolol,
tell your doctor and pharmacist if you are allergic to metoprolol, any other medications, or any of the ingredients in metoprolol tablets, extended-release tablets, or extended-release capsules. Ask your pharmacist for a list of the ingredients.
tell your doctor and pharmacist what prescription and nonprescription medications, vitamins, nutritional supplements, and herbal products you are taking or plan to take. Your doctor may need to change the doses of your medications or monitor you carefully for side effects.
tell your doctor if you have a slow or irregular heartbeat or heart failure. Your doctor may tell you not to take metoprolol.
tell your doctor if you have or have ever had asthma or other lung diseases; problems with blood circulation; pheochromocytoma (a tumor that develops on a gland near the kidneys and may cause high blood pressure and fast heartbeat); heart or liver disease; diabetes; or hyperthyroidism (an overactive thyroid gland). Also tell your doctor if you have ever had a serious allergic reaction to a food or any other substance.
tell your doctor if you are pregnant, plan to become pregnant, or are breastfeeding. If you become pregnant while taking metoprolol, call your doctor.
if you are having surgery, including dental surgery, tell the doctor or dentist that you are taking metoprolol.
you should know that metoprolol may make you drowsy. Do not drive a car or operate machinery until you know how this medication affects you.
do not drink any alcoholic drinks or take any prescription or nonprescription medications that contain alcohol if you are taking metoprolol extended-release capsules. Ask your doctor or pharmacist if you do not know if a medication that you plan to take contains alcohol.
you should know that metoprolol may increase the risk of hypoglycemia (low blood sugar) and prevent the warning signs and symptoms that would tell you that your blood sugar is low. Let your doctor know if you are unable to eat or drink normally or are vomiting while you are taking metoprolol. You should know the symptoms of low blood sugar and what to do if you have these symptoms.
you should know that if you have allergic reactions to different substances, your reactions may be worse while you are using metoprolol, and your allergic reactions may not respond to the usual doses of injectable epinephrine.
What special dietary instructions should I follow?
Unless your doctor tells you otherwise, continue your normal diet.
What should I do if I forget a dose?
Skip the missed dose and continue your regular dosing schedule. Do not take a double dose to make up for a missed one.
What side effects can this medication cause?
Metoprolol may cause side effects. Tell your doctor if any of these symptoms are severe or do not go away:
dizziness or lightheadedness
tiredness
depression
diarrhea
nausea
dry mouth
stomach pain
vomiting
gas or bloating
heartburn
runny nose
Some side effects can be serious. The following symptoms are uncommon, but if you experience any of them, call your doctor immediately:
shortness of breath or difficulty breathing
wheezing
weight gain
fainting
rapid, pounding, or irregular heartbeat
Metoprolol may cause other side effects. Call your doctor if you have any unusual problems while taking this medication.
If you experience a serious side effect, you or your doctor may send a report to the Food and Drug Administration's (FDA) MedWatch Adverse Event Reporting program online (https://www.fda.gov/Safety/MedWatch) or by phone (1-800-332-1088). | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
What is the purpose of the drug Metoprolol and what are some of the potential side effects of its usage? Make your response at least 150 words.
{passage 0}
==========
Metoprolol is used alone or in combination with other medications to treat high blood pressure. It also is used to treat chronic (long-term) angina (chest pain). Metoprolol is also used to improve survival after a heart attack. Metoprolol also is used in combination with other medications to treat heart failure. Metoprolol is in a class of medications called beta blockers. It works by relaxing blood vessels and slowing heart rate to improve blood flow and decrease blood pressure.
High blood pressure is a common condition and when not treated, can cause damage to the brain, heart, blood vessels, kidneys and other parts of the body. Damage to these organs may cause heart disease, a heart attack, heart failure, stroke, kidney failure, loss of vision, and other problems. In addition to taking medication, making lifestyle changes will also help to control your blood pressure. These changes include eating a diet that is low in fat and salt, maintaining a healthy weight, exercising at least 30 minutes most days, not smoking, and using alcohol in moderation.
How should this medicine be used?
Metoprolol comes as a tablet, an extended-release (long-acting) tablet, and an extended-release capsule to take by mouth. The regular tablet is usually taken once or twice a day with meals or immediately after meals. The extended-release tablet and extended-release capsule are usually taken once a day. To help you remember to take metoprolol, take it around the same time(s) every day. Follow the directions on your prescription label carefully, and ask your doctor or pharmacist to explain any part you do not understand. Take metoprolol exactly as directed. Do not take more or less of it or take it more often than prescribed by your doctor.
The extended-release tablet may be split. Swallow the whole or half extended-release tablets whole; do not chew or crush them.
Swallow the extended-release capsules whole; do not split, chew, or crush them. If you are unable to swallow the capsules, you may open the capsule and sprinkle the contents over a spoonful of soft food, such as applesauce, pudding, or yogurt and swallow the mixture immediately. Do not swallow the mixture more than 60 minutes after you sprinkle the contents of the capsule.
Your doctor may start you on a low dose of metoprolol and gradually increase your dose.
Metoprolol helps to control your condition but will not cure it. Continue to take metoprolol even if you feel well. Do not stop taking metoprolol without talking to your doctor. If you suddenly stop taking metoprolol you may experience serious heart problems such as severe chest pain, a heart attack, or an irregular heartbeat. Your doctor will probably want to decrease your dose gradually over 1 to 2 weeks and will monitor you closely.
Other uses for this medicine
Metoprolol is also used sometimes to treat certain types of irregular heartbeats. Talk to your doctor about the possible risks of using this medication for your condition.
This medication may be prescribed for other uses; ask your doctor or pharmacist for more information.
What special precautions should I follow?
Before taking metoprolol,
tell your doctor and pharmacist if you are allergic to metoprolol, any other medications, or any of the ingredients in metoprolol tablets, extended-release tablets, or extended-release capsules. Ask your pharmacist for a list of the ingredients.
tell your doctor and pharmacist what prescription and nonprescription medications, vitamins, nutritional supplements, and herbal products you are taking or plan to take. Your doctor may need to change the doses of your medications or monitor you carefully for side effects.
tell your doctor if you have a slow or irregular heartbeat or heart failure. Your doctor may tell you not to take metoprolol.
tell your doctor if you have or have ever had asthma or other lung diseases; problems with blood circulation; pheochromocytoma (a tumor that develops on a gland near the kidneys and may cause high blood pressure and fast heartbeat); heart or liver disease; diabetes; or hyperthyroidism (an overactive thyroid gland). Also tell your doctor if you have ever had a serious allergic reaction to a food or any other substance.
tell your doctor if you are pregnant, plan to become pregnant, or are breastfeeding. If you become pregnant while taking metoprolol, call your doctor.
if you are having surgery, including dental surgery, tell the doctor or dentist that you are taking metoprolol.
you should know that metoprolol may make you drowsy. Do not drive a car or operate machinery until you know how this medication affects you.
do not drink any alcoholic drinks or take any prescription or nonprescription medications that contain alcohol if you are taking metoprolol extended-release capsules. Ask your doctor or pharmacist if you do not know if a medication that you plan to take contains alcohol.
you should know that metoprolol may increase the risk of hypoglycemia (low blood sugar) and prevent the warning signs and symptoms that would tell you that your blood sugar is low. Let your doctor know if you are unable to eat or drink normally or are vomiting while you are taking metoprolol. You should know the symptoms of low blood sugar and what to do if you have these symptoms.
you should know that if you have allergic reactions to different substances, your reactions may be worse while you are using metoprolol, and your allergic reactions may not respond to the usual doses of injectable epinephrine.
What special dietary instructions should I follow?
Unless your doctor tells you otherwise, continue your normal diet.
What should I do if I forget a dose?
Skip the missed dose and continue your regular dosing schedule. Do not take a double dose to make up for a missed one.
What side effects can this medication cause?
Metoprolol may cause side effects. Tell your doctor if any of these symptoms are severe or do not go away:
dizziness or lightheadedness
tiredness
depression
diarrhea
nausea
dry mouth
stomach pain
vomiting
gas or bloating
heartburn
runny nose
Some side effects can be serious. The following symptoms are uncommon, but if you experience any of them, call your doctor immediately:
shortness of breath or difficulty breathing
wheezing
weight gain
fainting
rapid, pounding, or irregular heartbeat
Metoprolol may cause other side effects. Call your doctor if you have any unusual problems while taking this medication.
If you experience a serious side effect, you or your doctor may send a report to the Food and Drug Administration's (FDA) MedWatch Adverse Event Reporting program online (https://www.fda.gov/Safety/MedWatch) or by phone (1-800-332-1088).
https://medlineplus.gov/druginfo/meds/a682864.html |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | Simplify this passage about gene therapy for sickle cell. Explain what the therapy is and how it works. Also explain the type of sickle cell the patient had. Use bullets and headers so it can be easier to read. | Sickle cell disease results from a homozygous missense mutation in the β-globin gene that causes polymerization of hemoglobin S. Gene therapy for patients with this disorder is complicated by the complex cellular abnormalities and challenges in achieving effective, persistent inhibition of polymerization of hemoglobin S. We describe our first patient treated with lentiviral vector–mediated addition of an antisickling β-globin gene into autologous hematopoietic stem cells. Adverse events were consistent with busulfan conditioning. Fifteen months after treatment, the level of therapeutic antisickling β-globin remained high (approximately 50% of β-like–globin chains) without recurrence of sickle crises and with correction of the biologic hallmarks of the disease. (Funded by Bluebird Bio and others; HGB-205 ClinicalTrials.gov number, NCT02151526.)
Sickle cell disease is among the most prevalent inherited monogenic disorders. Approximately 90,000 people in the United States have sickle cell disease, and worldwide more than 275,000 infants are born with the disease annually.1,2 Sickle cell disease was the first disease for which the molecular basis was identified: a single amino acid substitution in “adult” βA-globin (Glu6Val) stemming from a single base substitution (A→T) in the first exon of the human βA-globin gene (HBB) was discovered in 1956.3 Sickle hemoglobin (HbS) polymerizes on deoxygenation, reducing the deformability of red cells. Patients have intensely painful vaso-occlusive crises, leading to irreversible organ damage, poor quality of life, and reduced life expectancy. Hydroxyurea, a cytotoxic agent that is capable of boosting fetal hemoglobin levels in some patients, is the only disease-modifying therapy approved for sickle cell disease.4
Allogeneic hematopoietic stem-cell transplantation currently offers the only curative option for patients with severe sickle cell disease.5,6 However, fewer than 18% of patients have access to a matched sibling donor.7,8 Therapeutic ex vivo gene transfer into autologous hematopoietic stem cells, referred to here as gene therapy, may provide a long-term and potentially curative treatment for sickle cell disease.9
We previously reported proof of effective, sustained gene therapy in mouse models of sickle cell disease by lentiviral transfer of a modified HBB encoding an antisickling variant (βA87Thr:Gln [βA-T87Q]).10,11 Here we report the results for a patient who received lentiviral gene therapy in the HGB-205 clinical study and who had complete clinical remission with correction of hemolysis and biologic hallmarks of the disease.
Case Report
A boy with the βS/βS genotype, a single 3.7-kb α-globin gene deletion, and no glucose 6-phosphate dehydrogenase deficiency received a diagnosis of sickle cell disease at birth and was followed at the Reference Centre for Sickle Cell Disease of Necker Children’s Hospital in Paris. He had a history of numerous vaso-occlusive crises, two episodes of the acute chest syndrome, and bilateral hip osteonecrosis. He had undergone cholecystectomy and splenectomy. During screening, a cerebral hypodensity without characteristics of cerebral vasculopathy was detected.
Because hydroxyurea therapy administered when the boy was between 2 and 9 years of age did not reduce his symptoms significantly, a prophylactic red-cell transfusion program was initiated in 2010, including iron chelation with deferasirox (at a dose of 17 mg per kilogram of body weight per day). He had had an average of 1.6 sickle cell disease–related events annually in the 9 years before transfusions were initiated.
In May 2014, he was enrolled in our clinical study. His verbal assent and his mother’s written informed consent were obtained. In October 2014, when the patient was 13 years of age, he received an infusion of the drug product LentiGlobin BB305.
Methods
Study Oversight
The study protocol, which is available with the full text of this article at NEJM.org, was designed by the last two authors and Bluebird Bio, the study sponsor. The protocol was reviewed by the French Comité de Protection des Personnes and relevant institutional ethics committees. Clinical data were collected by the first author, and laboratory data were generated by the sponsor, the last author, and other authors. The authors had access to all data, and data analysis was performed by them. The first author and one author employed by the sponsor wrote the first draft of the manuscript, which was substantively revised by the last two authors and further edited and approved by all the authors with writing assistance provided by an employee of the sponsor. The authors vouch for the accuracy and completeness of the data and adherence to the protocol.
Antisickling Gene Therapy Vector
The structure of the LentiGlobin BB305 vector has been previously described (see Fig. S1 in the Supplementary Appendix, available at NEJM.org).12 This self-inactivating lentiviral vector encodes the human HBB variant βA-T87Q. In addition to inhibiting HbS polymerization, the T87Q substitution allows for the β-globin chain of adult hemoglobin (HbA)T87Q to be differentially quantified by means of reverse-phase high-performance liquid chromatography.12
Gene Transfer and Transplantation Procedures
Bone marrow was obtained twice from the patient to collect sufficient stem cells for gene transfer and backup (6.2×108 per kilogram and 5.4×108 per kilogram, respectively, of total nucleated cells obtained). Both procedures were preceded by exchange transfusion, and bone marrow was obtained without clinical sequelae. Anemia was the only grade 3 adverse event reported during these procedures. Bone marrow–enriched CD34+ cells were transduced with LentiGlobin BB305 vector (see the Methods section in the Supplementary Appendix).13 The mean vector copy numbers for the two batches of transduced cells were 1.0 and 1.2 copies per cell.
The patient underwent myeloablation with intravenous busulfan (see the Methods section in the Supplementary Appendix). The total busulfan area under the curve achieved was 19,363 μmol per minute. After a 2-day washout period, transduced CD34+ cells (5.6×106 CD34+ cells per kilogram) were infused. Red-cell transfusions were to be continued after transplantation until a large proportion of HbAT87Q (25 to 30% of total hemoglobin) was detected.
The patient was followed for engraftment; toxic effects (graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.03); vector copy number in total nucleated blood cells and in different lineages; quantification of HbAT87Q, HbS, and fetal hemoglobin levels by means of high-performance liquid chromatography; DNA integration-site mapping by linear amplification–mediated polymerase chain reaction in nucleated blood cells; and replication-competent lentivirus analysis by p24 antibody enzyme-linked immunosorbent assay. Red-cell analyses were performed at month 12 (see the Methods section in the Supplementary Appendix).
Results
Engraftment and Gene Expression
Neutrophil engraftment was achieved on day 38 after transplantation, and platelet engraftment was achieved on day 91 after transplantation. Figure 1A shows the trajectory of vector copy numbers and Figure 1B shows production of HbAT87Q. Gene marking increased progressively in whole blood, CD15 cells, B cells, and monocytes (Fig. S2 in the Supplementary Appendix), stabilizing 3 months after transplantation. Increases in levels of vector-bearing T cells were more gradual.
Figure 1. Engraftment with Transduced Cells and Therapeutic Gene Expression in the Patient.
HbAT87Q levels also increased steadily (Figure 1B) and red-cell transfusions were discontinued, with the last transfusion on day 88. Levels of HbAT87Q reached 5.5 g per deciliter (46%) at month 9 and continued to increase to 5.7 g per deciliter (48%) at month 15, with a reciprocal decrease in HbS levels to 5.5 g per deciliter (46%) at month 9 and 5.8 g per deciliter (49%) at month 15. Total hemoglobin levels were stable between 10.6 and 12.0 g per deciliter after post-transplantation month 6. Fetal hemoglobin levels remained below 1.0 g per deciliter. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
Simplify this passage about gene therapy for sickle cell. Explain what the therapy is and how it works. Also explain the type of sickle cell the patient had. Use bullets and headers so it can be easier to read.
Sickle cell disease results from a homozygous missense mutation in the β-globin gene that causes polymerization of hemoglobin S. Gene therapy for patients with this disorder is complicated by the complex cellular abnormalities and challenges in achieving effective, persistent inhibition of polymerization of hemoglobin S. We describe our first patient treated with lentiviral vector–mediated addition of an antisickling β-globin gene into autologous hematopoietic stem cells. Adverse events were consistent with busulfan conditioning. Fifteen months after treatment, the level of therapeutic antisickling β-globin remained high (approximately 50% of β-like–globin chains) without recurrence of sickle crises and with correction of the biologic hallmarks of the disease. (Funded by Bluebird Bio and others; HGB-205 ClinicalTrials.gov number, NCT02151526.)
Sickle cell disease is among the most prevalent inherited monogenic disorders. Approximately 90,000 people in the United States have sickle cell disease, and worldwide more than 275,000 infants are born with the disease annually.1,2 Sickle cell disease was the first disease for which the molecular basis was identified: a single amino acid substitution in “adult” βA-globin (Glu6Val) stemming from a single base substitution (A→T) in the first exon of the human βA-globin gene (HBB) was discovered in 1956.3 Sickle hemoglobin (HbS) polymerizes on deoxygenation, reducing the deformability of red cells. Patients have intensely painful vaso-occlusive crises, leading to irreversible organ damage, poor quality of life, and reduced life expectancy. Hydroxyurea, a cytotoxic agent that is capable of boosting fetal hemoglobin levels in some patients, is the only disease-modifying therapy approved for sickle cell disease.4
Allogeneic hematopoietic stem-cell transplantation currently offers the only curative option for patients with severe sickle cell disease.5,6 However, fewer than 18% of patients have access to a matched sibling donor.7,8 Therapeutic ex vivo gene transfer into autologous hematopoietic stem cells, referred to here as gene therapy, may provide a long-term and potentially curative treatment for sickle cell disease.9
We previously reported proof of effective, sustained gene therapy in mouse models of sickle cell disease by lentiviral transfer of a modified HBB encoding an antisickling variant (βA87Thr:Gln [βA-T87Q]).10,11 Here we report the results for a patient who received lentiviral gene therapy in the HGB-205 clinical study and who had complete clinical remission with correction of hemolysis and biologic hallmarks of the disease.
Case Report
A boy with the βS/βS genotype, a single 3.7-kb α-globin gene deletion, and no glucose 6-phosphate dehydrogenase deficiency received a diagnosis of sickle cell disease at birth and was followed at the Reference Centre for Sickle Cell Disease of Necker Children’s Hospital in Paris. He had a history of numerous vaso-occlusive crises, two episodes of the acute chest syndrome, and bilateral hip osteonecrosis. He had undergone cholecystectomy and splenectomy. During screening, a cerebral hypodensity without characteristics of cerebral vasculopathy was detected.
Because hydroxyurea therapy administered when the boy was between 2 and 9 years of age did not reduce his symptoms significantly, a prophylactic red-cell transfusion program was initiated in 2010, including iron chelation with deferasirox (at a dose of 17 mg per kilogram of body weight per day). He had had an average of 1.6 sickle cell disease–related events annually in the 9 years before transfusions were initiated.
In May 2014, he was enrolled in our clinical study. His verbal assent and his mother’s written informed consent were obtained. In October 2014, when the patient was 13 years of age, he received an infusion of the drug product LentiGlobin BB305.
Methods
Study Oversight
The study protocol, which is available with the full text of this article at NEJM.org, was designed by the last two authors and Bluebird Bio, the study sponsor. The protocol was reviewed by the French Comité de Protection des Personnes and relevant institutional ethics committees. Clinical data were collected by the first author, and laboratory data were generated by the sponsor, the last author, and other authors. The authors had access to all data, and data analysis was performed by them. The first author and one author employed by the sponsor wrote the first draft of the manuscript, which was substantively revised by the last two authors and further edited and approved by all the authors with writing assistance provided by an employee of the sponsor. The authors vouch for the accuracy and completeness of the data and adherence to the protocol.
Antisickling Gene Therapy Vector
The structure of the LentiGlobin BB305 vector has been previously described (see Fig. S1 in the Supplementary Appendix, available at NEJM.org).12 This self-inactivating lentiviral vector encodes the human HBB variant βA-T87Q. In addition to inhibiting HbS polymerization, the T87Q substitution allows for the β-globin chain of adult hemoglobin (HbA)T87Q to be differentially quantified by means of reverse-phase high-performance liquid chromatography.12
Gene Transfer and Transplantation Procedures
Bone marrow was obtained twice from the patient to collect sufficient stem cells for gene transfer and backup (6.2×108 per kilogram and 5.4×108 per kilogram, respectively, of total nucleated cells obtained). Both procedures were preceded by exchange transfusion, and bone marrow was obtained without clinical sequelae. Anemia was the only grade 3 adverse event reported during these procedures. Bone marrow–enriched CD34+ cells were transduced with LentiGlobin BB305 vector (see the Methods section in the Supplementary Appendix).13 The mean vector copy numbers for the two batches of transduced cells were 1.0 and 1.2 copies per cell.
The patient underwent myeloablation with intravenous busulfan (see the Methods section in the Supplementary Appendix). The total busulfan area under the curve achieved was 19,363 μmol per minute. After a 2-day washout period, transduced CD34+ cells (5.6×106 CD34+ cells per kilogram) were infused. Red-cell transfusions were to be continued after transplantation until a large proportion of HbAT87Q (25 to 30% of total hemoglobin) was detected.
The patient was followed for engraftment; toxic effects (graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events, version 4.03); vector copy number in total nucleated blood cells and in different lineages; quantification of HbAT87Q, HbS, and fetal hemoglobin levels by means of high-performance liquid chromatography; DNA integration-site mapping by linear amplification–mediated polymerase chain reaction in nucleated blood cells; and replication-competent lentivirus analysis by p24 antibody enzyme-linked immunosorbent assay. Red-cell analyses were performed at month 12 (see the Methods section in the Supplementary Appendix).
Results
Engraftment and Gene Expression
Neutrophil engraftment was achieved on day 38 after transplantation, and platelet engraftment was achieved on day 91 after transplantation. Figure 1A shows the trajectory of vector copy numbers and Figure 1B shows production of HbAT87Q. Gene marking increased progressively in whole blood, CD15 cells, B cells, and monocytes (Fig. S2 in the Supplementary Appendix), stabilizing 3 months after transplantation. Increases in levels of vector-bearing T cells were more gradual.
Figure 1. Engraftment with Transduced Cells and Therapeutic Gene Expression in the Patient.
HbAT87Q levels also increased steadily (Figure 1B) and red-cell transfusions were discontinued, with the last transfusion on day 88. Levels of HbAT87Q reached 5.5 g per deciliter (46%) at month 9 and continued to increase to 5.7 g per deciliter (48%) at month 15, with a reciprocal decrease in HbS levels to 5.5 g per deciliter (46%) at month 9 and 5.8 g per deciliter (49%) at month 15. Total hemoglobin levels were stable between 10.6 and 12.0 g per deciliter after post-transplantation month 6. Fetal hemoglobin levels remained below 1.0 g per deciliter.
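For readers who want to verify the reported hemoglobin fractions, the percentages follow from the gram-per-deciliter values given above; the Python sketch below is illustrative only, and the implied total hemoglobin is derived from the passage's own figures rather than measured.
# Illustrative check of the month-15 fractions reported above (values from the passage).
hba_t87q_g_per_dl = 5.7            # HbA(T87Q), reported as approximately 48% of total hemoglobin
hbs_g_per_dl = 5.8                 # HbS, reported as approximately 49% of total hemoglobin
implied_total = hba_t87q_g_per_dl / 0.48   # ~11.9 g/dL, within the stable 10.6-12.0 g/dL range
print(round(100 * hba_t87q_g_per_dl / implied_total))  # -> 48
print(round(100 * hbs_g_per_dl / implied_total))       # -> 49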
https://www.nejm.org/doi/full/10.1056/NEJMoa1609677#:~:text=HbAT87Q%20levels%20also%20increased,below%201.0%20g%20per%20deciliter. |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | I have been considering liposuction, but my sister said a tummy tuck or cool sculpting is best. I am familiar with tummy tucks, but unfamiliar with cool sculpting.
In 150 words or less, please tell me how the three options compare regarding the procedure and recovery time. | Coolsculpting, Liposuction or a Tummy Tuck: How to Choose the Best Procedure for You
If you’re considering body contouring but aren’t sure which procedure is right for you (CoolSculpting vs liposuction vs tummy tuck), consider the following:
Your Anatomy: Each of these procedures treats very different issues, and you may not be a candidate for all three. Do you have excess fat and no extra skin? Or just loose skin but no excess fat? Or do you have some of both? Are your stomach muscles separated? Generally speaking, tummy tucks are best for patients with excess skin or muscles that need tightening, while liposuction and CoolSculpting are best to treat stubborn areas of fat that just won’t go away.
Your Timeline: Are you looking for immediate results or would you rather see your results come gradually over time? Choosing between CoolSculpting vs liposuction means you’ll need to consider what kind of time you have to recover. Liposuction results will come immediately but it’s an intensive treatment that requires rest and recovery in the weeks following surgery. CoolSculpting results usually require multiple treatment sessions and appear more gradually, but require little to no downtime.
Your Expectations: What kind of result will you be satisfied with? Do you want your stomach to be as flat, tight and smooth as possible, or would you be happy with mild improvement?
Your Health: Are you healthy enough to have a surgical procedure? If not, CoolSculpting may be your only non-invasive option.
Weighing Your Options: What’s the Difference in Procedures?
CoolSculpting: Best non-surgical option for excess fat removal
If you have small pockets of fat that just don’t respond to diet and exercise, CoolSculpting may be a great option for you. CoolSculpting is a revolutionary, non-surgical body contouring procedure that removes unwanted bulges by freezing the fat until it breaks down. This allows your body to eliminate the fat naturally while your skin, muscle, and other tissues stay unharmed and healthy.
This procedure is FDA-cleared and comfortable, requiring no downtime. Patients may often decide to get CoolSculpting done in the abdominal area, and, oftentimes, in combination with other parts of the body, including hips, flanks, back, thighs, chin, legs, and more.
Unlike liposuction or tummy tuck procedures, where patients must take time to rest and recover, with CoolSculpting, patients immediately return to their normal activities, including strenuous exercise. The best results may require multiple treatments, but changes are usually noticeable just three weeks after treatment, with final results seen after one to three months.
Liposuction: Best for effective (but less invasive) fat removal
Liposuction is highly effective for safely removing stubborn areas of unwanted fat. It is an ideal solution for people who have good skin, good muscle tone, and no excess, loose skin.
Unlike a non-surgical CoolSculpting treatment, liposuction is a surgical procedure requiring a very small incision to access the targeted area. The surgeon will insert a thin cannula which is used to clean the area with saline and anesthetic solutions, while loosening the fat cells. The fat cells are then suctioned away with a surgical vacuum. The procedure is performed on an outpatient basis with general anesthesia. It takes between one to five hours, depending upon the size of the treatment area.
In addition to slimming the abdomen, liposuction can be used in many other places of the body, including the sides (love handles), arms, chin, legs, and bottom. Liposuction is far less invasive than a tummy tuck, but it only removes fat. Liposuction will not eliminate excess skin or stretch marks, and it will not tighten loose abdominal muscles. While both liposuction and CoolSculpting are used to remove unwanted fat, the results can be different. Liposuction offers precise contouring with immediate results in a single procedure vs CoolSculpting which requires no surgery or downtime but a longer wait time for results.
Tummy Tuck: Best for removing excess skin and fat
A tummy tuck, also known as an abdominoplasty, addresses the unwanted fat in a person’s abdomen, sagging skin, loose muscles and stretch marks associated with life events such as pregnancy, drastic weight loss, and age. The biggest difference between a patient who qualifies for a tummy tuck vs liposuction procedure is the presence of this excess skin as a result of losing fat quickly.
A tummy tuck procedure starts with an incision across the lower abdomen, allowing the surgeon to remove excess skin and tighten slack or loose muscles. The incision is strategically placed to be as inconspicuous as possible, so it can be hidden by underwear and bathing suits, should scarring occur. The length of the incision is determined by the patient’s anatomy and the level of correction needed to achieve the desired results.
Unwanted fat, skin and stretch marks are removed and weak, protruding abdominal muscles are repaired – leaving a tight, flat, smooth tummy! A tummy tuck takes two to five hours and is performed on an outpatient basis with general anesthesia. A tummy tuck is a more invasive procedure compared to liposuction because of all the tightening involved. The recovery time is also much longer for a tummy tuck vs a non-surgical procedure like CoolSculpting.
Combining a Tummy Tuck, Liposuction and CoolSculpting for Optimal Results
Ultimately, there is no one size fits all treatment for body contouring procedures. Each patient’s situation is going to be different and our surgeons may frequently recommend a combination of procedures in order to achieve the desired results.
Because of the differences in what each procedure does, liposuction is often performed during a tummy tuck procedure in order to remove the excess fat before tightening and smoothing out the skin. CoolSculpting can also be performed before or after a tummy tuck or liposuction procedure to enhance the body contouring results further. At Belcara Health, we develop customized treatment plans to meet the specific needs and goals of each patient. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
I have been considering liposuction, but my sister said a tummy tuck or cool sculpting is best. I am familiar with tummy tucks, but unfamiliar with cool sculpting.
In 150 words or less, please tell me how the three options compare regarding the procedure and recovery time.
{passage 0}
==========
Coolsculpting, Liposuction or a Tummy Tuck: How to Choose the Best Procedure for You
If you’re considering body contouring but aren’t sure which procedure is right for you (CoolSculpting vs liposuction vs tummy tuck), consider the following:
Your Anatomy: Each of these procedures treats very different issues, and you may not be a candidate for all three. Do you have excess fat and no extra skin? Or just loose skin but no excess fat? Or do you have some of both? Are your stomach muscles separated? Generally speaking, tummy tucks are best for patients with excess skin or muscles that need tightening, while liposuction and CoolSculpting are best to treat stubborn areas of fat that just won’t go away.
Your Timeline: Are you looking for immediate results or would you rather see your results come gradually over time? Choosing between CoolSculpting vs liposuction means you’ll need to consider what kind of time you have to recover. Liposuction results will come immediately but it’s an intensive treatment that requires rest and recovery in the weeks following surgery. CoolSculpting results usually require multiple treatment sessions and appear more gradually, but require little to no downtime.
Your Expectations: What kind of result will you be satisfied with? Do you want your stomach to be as flat, tight and smooth as possible, or would you be happy with mild improvement?
Your Health: Are you healthy enough to have a surgical procedure? If not, CoolSculpting may be your only non-invasive option.
Weighing Your Options: What’s the Difference in Procedures?
CoolSculpting: Best non-surgical option for excess fat removal
If you have small pockets of fat that just don’t respond to diet and exercise, CoolSculpting may be a great option for you. CoolSculpting is a revolutionary, non-surgical body contouring procedure that removes unwanted bulges by freezing the fat until it breaks down. This allows your body to eliminate the fat naturally while your skin, muscle, and other tissues stay unharmed and healthy.
This procedure is FDA-cleared and comfortable, requiring no downtime. Patients may often decide to get CoolSculpting done in the abdominal area, and, oftentimes, in combination with other parts of the body, including hips, flanks, back, thighs, chin, legs, and more.
Unlike liposuction or tummy tuck procedures, where patients must take time to rest and recover, with CoolSculpting, patients immediately return to their normal activities, including strenuous exercise. The best results may require multiple treatments, but changes are usually noticeable just three weeks after treatment, with final results seen after one to three months.
Liposuction: Best for effective (but less invasive) fat removal
Liposuction is highly effective for safely removing stubborn areas of unwanted fat. It is an ideal solution for people who have good skin, good muscle tone, and no excess, loose skin.
Unlike a non-surgical CoolSculpting treatment, liposuction is a surgical procedure requiring a very small incision to access the targeted area. The surgeon will insert a thin cannula which is used to clean the area with saline and anesthetic solutions, while loosening the fat cells. The fat cells are then suctioned away with a surgical vacuum. The procedure is performed on an outpatient basis with general anesthesia. It takes between one to five hours, depending upon the size of the treatment area.
In addition to slimming the abdomen, liposuction can be used in many other places of the body, including the sides (love handles), arms, chin, legs, and bottom. Liposuction is far less invasive than a tummy tuck, but it only removes fat. Liposuction will not eliminate excess skin or stretch marks, and it will not tighten loose abdominal muscles. While both liposuction and CoolSculpting are used to remove unwanted fat, the results can be different. Liposuction offers precise contouring with immediate results in a single procedure vs CoolSculpting which requires no surgery or downtime but a longer wait time for results.
Tummy Tuck: Best for removing excess skin and fat
A tummy tuck, also known as an abdominoplasty, addresses the unwanted fat in a person’s abdomen, sagging skin, loose muscles and stretch marks associated with life events such as pregnancy, drastic weight loss, and age. The biggest difference between a patient who qualifies for a tummy tuck vs liposuction procedure is the presence of this excess skin as a result of losing fat quickly.
A tummy tuck procedure starts with an incision across the lower abdomen, allowing the surgeon to remove excess skin and tighten slack or loose muscles. The incision is strategically placed to be as inconspicuous as possible, so it can be hidden by underwear and bathing suits, should scarring occur. The length of the incision is determined by the patient’s anatomy and the level of correction needed to achieve the desired results.
Unwanted fat, skin and stretch marks are removed and weak, protruding abdominal muscles are repaired – leaving a tight, flat, smooth tummy! A tummy tuck takes two to five hours and is performed on an outpatient basis with general anesthesia. A tummy tuck is a more invasive procedure compared to liposuction because of all the tightening involved. The recovery time is also much longer for a tummy tuck vs a non-surgical procedure like CoolSculpting.
Combining a Tummy Tuck, Liposuction and CoolSculpting for Optimal Results
Ultimately, there is no one size fits all treatment for body contouring procedures. Each patient’s situation is going to be different and our surgeons may frequently recommend a combination of procedures in order to achieve the desired results.
Because of the differences in what each procedure does, liposuction is often performed during a tummy tuck procedure in order to remove the excess fat before tightening and smoothing out the skin. CoolSculpting can also be performed before or after a tummy tuck or liposuction procedure to enhance the body contouring results further. At Belcara Health, we develop customized treatment plans to meet the specific needs and goals of each patient.
https://www.belcarahealth.com/blog/tummy-tuck-vs-lipo-one-best/ |
You are to answer the question below based only on the information in the provided text. Do not pull from prior or outside knowledge. Use bold section headings with informative bullet points nested within the headings. | What are some pros and cons of smart watches? | There are plenty of great smart watches to choose from, ranging from the Android Wear army to the most recent Pebble watches, and the Apple Watch is also selling like hotcakes. With smart watches we can find apps on an Android watch, download and install applications, and keep an eye on navigation. A smart watch stays on your wrist and informs us of what is happening, so we won't appear tired in meetings [1][2]. 1. Benefits of Android Wear smart watches. 1.1 Slumber: Slumber simply blacks out your screen while the watch is charging. If the user is charging the watch overnight, this light will glare in the user's face, and on some watches the continuous display has even caused screen burn-in; most Android Wear devices turn on to charge and display a screen indicating the current battery level (Figure 1) [3].
Smart watches are a type of wearable device that allows us to keep our hands free while also allowing us to use our smartphones away from our bodies. 1.2 Battery statistics: The smart watch can help determine the battery consumption of the user's phone, and Wear power supply Stats can help calculate the drainage of the phone battery. The app's watch counterpart gives you a barebones view of what's going on with your battery, but you'll want to open the app on your phone for the most information. You'll be able to see how much time you've spent staring at your screen and which apps you've used, as shown in Figure 2 [4]–[6]. 1.3 Smart watches as a calculator: Whether used for calculus homework, calculating a quick sale percentage, tipping at a restaurant, or double-checking your math, having an accessible calculator is a good idea. A wearable smart watch worn on the wrist has comparable functionality and capabilities to a smartphone. Smart watches are designed to provide features such as accessing the internet, running mobile applications, making a call, communicating via text or video, checking caller ID, accessing stock and weather updates, providing wellness monitoring capabilities, providing GPS location and directions, and more, either on their own or when paired with a smartphone [3], [7], [8]. 1.4 Smart watch as a mobile phone finder: Apps that help you find your phone are common, but since your watch is linked to your phone, being able to ring it right from your wrist is a great idea. As long as your phone is connected to your watch via Bluetooth, simply opening the app and tapping "Find!" on your watch will ring your phone. You can also use the app to set a notification that sounds on both your watch and phone when they are disconnected. That way, if you're about to leave the house without your phone but not your watch, you'll be warned before making a costly error.
Some smart watches use an E-INK display, as the battery life of an E-INK display is 5 days. Interface: Touch interfaces are more intuitive to use, and many people use a mixture of touch and buttons. People who work out will benefit greatly from smart watches. Sensors are built into these devices that compute calories expended, distance traveled, speed, the user's pulse rate, and the user's location through GPS. Elderly falls are one of the most difficult issues that public health systems must deal with. According to World Health Organization (WHO) statistics, falls are the second largest cause of accidental injury mortality after road traffic accidents. Adults over the age of 65 are by far the most vulnerable to this problem, since falls may have a significant effect on their well-being and self-sufficiency. There are a variety of commercial wearables available now that are especially intended to detect falls (see, for example, the reviews presented in the literature for an analysis of the most popular products). These off-the-shelf gadgets, which are generally marketed as a pendant or bracelet, usually include a help button to summon assistance (a useless function if the patient remains unconscious after an accident). These alerting systems are primarily designed for in-home monitoring through specialized base stations connected to a landline. A monthly charge is needed to offer cell phone service when the customer wants on-the-go (ubiquitous) tracking, in addition to the cost of the detector and (in certain instances) the requirement for long-term contracts. Furthermore, in nearly all instances, the manufacturers do not provide information regarding the detection method used or how the detector was validated. As a result, the real efficiency of these systems for detecting falls has not been benchmarked (particularly when they are applied to the target population, the elderly).
The major issue with a smartwatch-based fall detection system (FDS) is that analyzing wrist motions may lead to overestimates, that is, an excess of false alarms produced by the jerky activity of the arms and hands, which is not necessarily indicative of the rest of the body's mobility. Because of the compensatory movements of the hands, the wrist exhibits a completely different mobility pattern during a fall compared with measurements captured at other body positions; as a result, when the inertial sensor is placed on the wrist, fall-related accelerometry signals may be misinterpreted more frequently as those originating from other activities of daily living (ADLs), and vice versa.
However, most smartwatches have significant battery and computational power limitations. In fact, autonomy and tiny displays have long been seen as two of the most significant obstacles to smartwatch adoption in health monitoring applications targeted at the elderly. The number of sensors and the sampling rates used in a smartwatch have a direct relationship with battery consumption. As a result, the main limiting issue for the deployment and acceptance of apps that need continuous signal monitoring is the battery capacity (which is typically much lower than that of smartphones). Most activity recognition systems would be jeopardized if the battery autonomy were less than 24 hours, since movement monitoring would have to be stopped before sleep to recharge the batteries. An extra, constantly running fall detection program may have a significant effect on battery life. In fact, according to recent research based on questionnaires completed by respondents in real-world testing of a worn fall sensor, consumers prefer devices that can operate for at least 6 months before needing to charge the battery.
We examine commercially accessible smart watches, where adoption is still in its early stages. With smart watches, accessing social networks becomes faster, and the user spends less time and effort pulling out a phone. With a smart watch, calls and alerts are less likely to be missed. Camera features and data may be synchronised with smart phones. A predominantly touch-based interface makes navigation easier, and many smart watches use a mixture of touch and buttons [1], [10], [11].
| You are to answer the question below based only on the information in the provided text. Do not pull from prior or outside knowledge. Use bold section headings with informative bullet points nested within the headings.
Question: What are some pros and cons of smart watches?
|
Answer the questions from only the provided text. Do not use any external resources or prior knowledge. Explain your answer but do not exceed 250 words per answer. | My family has been grazing our cattle on federal government land that is not U.S. Fish and Wildlife Service or a National Park for 75 years that has been banned from being used for geothermal leasing. Do we have protected rights to keep grazing our cattle on that land? | Lands and interest in lands owned by the United States (i.e., federal lands) have been withdrawn
from agency management under various public land laws. Federal land withdrawals typically
seek to preclude lands from being used for certain purposes (i.e., withdraw them) in order to
dedicate them to other purposes or to maintain other public values. For example, some laws
established or expanded federal land designations, such as wilderness areas or units of the
National Park System, and withdrew the lands apparently to foster the primary purposes of these
designations. Withdrawals affect lands managed by agencies including the four major land
management agencies: the Bureau of Land Management (BLM), U.S. Fish and Wildlife Service
(FWS), and National Park Service (NPS), all in the Department of the Interior, and the U.S.
Forest Service (FS), in the Department of Agriculture.
The first component of the example provision generally would bar third parties from applying to take ownership and
obtaining possession of the lands or resources on the lands under public land laws. However, the lack of a comprehensive list
of public land laws—and the lack of a single, consistent definition of the term public land laws itself over time—makes it
challenging to determine the precise meaning and applicability. The second component generally would prevent the
withdrawn lands from being available for new mining (e.g., under the General Mining Law of 1872). The third component
generally would prevent the withdrawn lands from being available for new mineral leasing, sale of mineral materials, and
geothermal leasing (e.g., under the Mineral Leasing Act of 1920, Materials Act of 1947, and Geothermal Steam Act of 1970).
Together, the three components primarily would affect BLM and FS, because laws governing lands managed by those
agencies generally allow for energy and mineral development and provide broader authority to convey lands out of federal
ownership than laws governing NPS and FWS lands. Typically, the three components would not bar various surface uses that
otherwise might be allowed, possibly including recreation, hunting, and livestock grazing. However, some uses might be
limited by Congress or by subsequent agency actions, such as amendments to land management plans, if the uses are
inconsistent with the withdrawal’s purposes.
Defining “Valid Existing Rights”
As used in legislated withdrawals, a “valid existing right” is a third-party (i.e., nonfederal)
interest in federal land that the relevant federal agency cannot terminate or unduly limit.82 To have
a valid existing right, the third party must
have met the requirements under the relevant law to obtain a property interest in
the land (i.e., the property interest must be valid);
have had a protectable interest before the United States withdraws the land (i.e.,
the property interest was existing at the time of withdrawal);83 and
possess a property interest (or in some cases a possessory interest) in the land that
constitutes a right for purposes of withdrawals (i.e., it must be a right).84
Valid
The validity of the interest depends on whether the third party has met the requirements of the
law under which it alleges to have secured the property interest. First, the interest itself must be
legitimate (i.e., supported by evidence of the factual basis required by the relevant statute). For
example, to secure a mining claim as a valid right under the mining laws, a claimant must
demonstrate that they have made a “valid discovery” of a valuable mineral deposit that can be
extracted and marketed.
Existing
The second requirement for a third party to have a “valid existing right” is that the property
interest existed at the time of withdrawal.90 Depending on the legal basis for the right, a third
party obtains an interest in federal land either (1) once they meet the statutory requirements,
without the federal agency having to act, or (2) when the federal agency exercises its discretion to
grant the property interest after the third party meets the relevant statutory requirements.91
Third
parties claiming property interests under laws that do not require the federal agency to grant the
interest have an existing property interest as soon as they meet the law’s requirements.92 For
example, a claimant under federal mining laws is entitled to the claim once they complete the
statutory steps described above (discovery and location).93 Whether the Secretary of the Interior
has issued a land patent to transfer title to the claimant does not affect the claimant’s right to the
land; once federal mining law requirements are met, the property right “vests” (i.e., ownership is
transferred to the claimant) and the right exists.94
In some cases, the claimant need not complete
all of the required steps before the withdrawal to obtain an existing right. If the law allows claims
to relate back to occupancy (i.e., be back-dated to when the claimant first occupied the land),
claimants may have existing rights if they occupied the land before withdrawal and ultimately
complete the remaining steps required by law.95
Other laws provide that a claimant’s interest in federal land only becomes a valid existing right
once the Secretary has acted to make it valid.96
For example, third parties acquire oil and gas
leases when the Secretary of the Interior approves their application.97
Although courts and
agencies have recognized these leases as valid existing rights in various contexts, they have not
recognized applications for oil and gas leases or other leasehold interests in federal land.
Courts and agencies have at times concluded that a third party has a valid existing right despite
not having established an interest by law before the land is withdrawn.99
The Solicitor of the
Department of the Interior has offered “an expansive interpretation of ‘existing valid rights’ in the
context of withdrawal"100
that includes "all prior valid applications for entry, selection, or
location, which were substantially complete at the date of the withdrawal” and “[c]laims under
the Color of Title Act of December 22, 1928.”101 A court or agency also may recognize a valid
existing right, even if the claimant is not legally entitled to it, because it would be equitable (i.e.,
consistent with the principles of justice).102
Rights
Not all uses of or interests in federal land qualify as valid existing “rights.” The third party
usually must have obtained a property interest in the land to have a right; merely using the land
generally is insufficient to establish a valid existing right.103
To determine whether the asserted
interest qualifies as a right, courts and agencies examine the law authorizing the interest and the
withdrawal law.104 Courts and agencies have recognized a number of property interests as
protected rights, such as entitlements to land patents under mining laws and entry-based laws
such as the Homestead Acts and the Trade and Manufacturing Site Act;105 land grants to states;106
rights-of-way;107 and mineral leases.108
Courts and agencies also have deemed certain possessory interests protected, the most common
example being perfected but unpatented mining claims.109
However, they have declined to
recognize other possessory interests as valid existing rights.110 Courts and agencies have generally
not recognized permits, such as grazing permits, as protected property rights for purposes of
interpreting withdrawals, absent a specific provision in the withdrawal law or order.111
| Answer the questions from only the provided text. Do not use any external resources or prior knowledge. Explain your answer but do not exceed 250 words per answer.
My family has been grazing our cattle on federal government land that is not U.S. Fish and Wildlife Service or a National Park for 75 years that has been banned from being used for geothermal leasing. Do we have protected rights to keep grazing our cattle on that land? |
Create your response by referencing the provided text. Limit your response to 100 words. If you cannot answer using the context alone, say "I can't determine the answer without more context." | What's nifedipine? | Aortic Regurgitation
Essentials of Diagnosis
• Causes include congenital bicuspid valve, endocarditis, rheumatic
heart disease, Marfan’s syndrome, aortic dissection, ankylosing
spondylitis, reactive arthritis, and syphilis
• Acute aortic regurgitation: Abrupt onset of pulmonary edema
• Chronic aortic regurgitation: Asymptomatic until middle age,
when symptoms of left heart failure develop insidiously
• Soft, high-pitched, decrescendo holodiastolic murmur in chronic
aortic regurgitation; occasionally, an accompanying apical lowpitched diastolic rumble (Austin Flint murmur) in nonrheumatic
patients; in acute aortic regurgitation, the diastolic murmur can be
short (or not even heard) and harsh
• Acute aortic regurgitation: Reduced S1 and an S3; rales
• Chronic aortic regurgitation: Reduced S1, wide pulse pressure, waterhammer pulse, subungual capillary pulsations (Quincke’s sign),
rapid rise and fall of pulse (Corrigan’s pulse), and a diastolic murmur
over a partially compressed femoral artery (Duroziez’s sign)
• ECG shows left ventricular hypertrophy
• Echo Doppler confirms diagnosis, estimates severity
■ Differential Diagnosis
• Pulmonary hypertension with Graham Steell murmur
• Mitral, or rarely, tricuspid stenosis
• Left ventricular failure due to other cause
• Dock’s murmur of left anterior descending artery stenosis
■ Treatment
• Vasodilators (eg, nifedipine and ACE inhibitors) do not delay the
progression to valve replacement in patients with mild to moderate aortic regurgitation
• In chronic aortic regurgitation, surgery reserved for patients with
symptoms or ejection function < 50% on echocardiography
• Acute regurgitation caused by aortic dissection or endocarditis
requires surgical replacement of the valve
■ Pearl
The Hodgkin-Key murmur of aortic regurgitation is harsh and raspy,
caused by leaflet eventration typical of luetic aortopathy.
Reference
Kamath AR, Varadarajan P, Turk R, Sampat U, Patel R, Khandhar S, Pai RG.
Survival in patients with severe aortic regurgitation and severe left ventricular dysfunction is improved by aortic valve replacement. Circulation 2009;
120(suppl):S134. [PMID: 19752358]
Aortic Stenosis
■ Essentials of Diagnosis
• Causes include congenital bicuspid valve and progressive calcification with aging of a normal three-leaflet valve; rheumatic fever
rarely, if ever, causes isolated aortic stenosis
• Dyspnea, angina, and syncope singly or in any combination;
sudden death in less than 1% of asymptomatic patients
• Weak and delayed carotid pulses (pulsus parvus et tardus); a soft,
absent, or paradoxically split S2; a harsh diamond-shaped systolic ejection murmur to the right of the sternum, often radiating
to the neck, but on occasion heard apically (Gallavardin’s phenomenon)
• Left ventricular hypertrophy by ECG and chest x-ray may show
calcification in the aortic valve
• Echo confirms diagnosis and estimates valve area and gradient;
cardiac catheterization confirms severity if there is discrepancy
between physical exam and echo; concomitant coronary atherosclerotic disease present in 50%
■ Differential Diagnosis
• Mitral regurgitation
• Hypertrophic obstructive or dilated cardiomyopathy
• Atrial or ventricular septal defect
• Syncope due to other causes
• Ischemic heart disease without valvular abnormality
■ Treatment
• Surgery is indicated for all patients with severe aortic stenosis
(mean aortic valve gradient > 40 mm Hg or valve area ≤ 1.0 cm2
)
and the presence of symptoms or ejection fraction < 50%
• Percutaneous balloon valvuloplasty for temporary (6 months)
relief of symptoms in poor surgical candidates
■ Pearl
In many cases, the softer the murmur, the worse the stenosis.
Reference
Dal-Bianco JP, Khandheria BK, Mookadam F, Gentile F, Sengupta PP.
Management of asymptomatic severe aortic stenosis. J Am Coll Cardiol
2008;52:1279. [PMID: 18929238] | Create your response by referencing the provided text. Limit your response to 100 words. If you cannot answer using the context alone, say "I can't determine the answer without more context."
Aortic Regurgitation
Essentials of Diagnosis
• Causes include congenital bicuspid valve, endocarditis, rheumatic
heart disease, Marfan’s syndrome, aortic dissection, ankylosing
spondylitis, reactive arthritis, and syphilis
• Acute aortic regurgitation: Abrupt onset of pulmonary edema
• Chronic aortic regurgitation: Asymptomatic until middle age,
when symptoms of left heart failure develop insidiously
• Soft, high-pitched, decrescendo holodiastolic murmur in chronic
aortic regurgitation; occasionally, an accompanying apical lowpitched diastolic rumble (Austin Flint murmur) in nonrheumatic
patients; in acute aortic regurgitation, the diastolic murmur can be
short (or not even heard) and harsh
• Acute aortic regurgitation: Reduced S1 and an S3; rales
• Chronic aortic regurgitation: Reduced S1, wide pulse pressure, waterhammer pulse, subungual capillary pulsations (Quincke’s sign),
rapid rise and fall of pulse (Corrigan’s pulse), and a diastolic murmur
over a partially compressed femoral artery (Duroziez’s sign)
• ECG shows left ventricular hypertrophy
• Echo Doppler confirms diagnosis, estimates severity
■ Differential Diagnosis
• Pulmonary hypertension with Graham Steell murmur
• Mitral, or rarely, tricuspid stenosis
• Left ventricular failure due to other cause
• Dock’s murmur of left anterior descending artery stenosis
■ Treatment
• Vasodilators (eg, nifedipine and ACE inhibitors) do not delay the
progression to valve replacement in patients with mild to moderate aortic regurgitation
• In chronic aortic regurgitation, surgery reserved for patients with
symptoms or ejection function < 50% on echocardiography
• Acute regurgitation caused by aortic dissection or endocarditis
requires surgical replacement of the valve
■ Pearl
The Hodgkin-Key murmur of aortic regurgitation is harsh and raspy,
caused by leaflet eventration typical of luetic aortopathy.
Reference
Kamath AR, Varadarajan P, Turk R, Sampat U, Patel R, Khandhar S, Pai RG.
Survival in patients with severe aortic regurgitation and severe left ventricular dysfunction is improved by aortic valve replacement. Circulation 2009;
120(suppl):S134. [PMID: 19752358]
Aortic Stenosis
■ Essentials of Diagnosis
• Causes include congenital bicuspid valve and progressive calcification with aging of a normal three-leaflet valve; rheumatic fever
rarely, if ever, causes isolated aortic stenosis
• Dyspnea, angina, and syncope singly or in any combination;
sudden death in less than 1% of asymptomatic patients
• Weak and delayed carotid pulses (pulsus parvus et tardus); a soft,
absent, or paradoxically split S2; a harsh diamond-shaped systolic ejection murmur to the right of the sternum, often radiating
to the neck, but on occasion heard apically (Gallavardin’s phenomenon)
• Left ventricular hypertrophy by ECG and chest x-ray may show
calcification in the aortic valve
• Echo confirms diagnosis and estimates valve area and gradient;
cardiac catheterization confirms severity if there is discrepancy
between physical exam and echo; concomitant coronary atherosclerotic disease present in 50%
■ Differential Diagnosis
• Mitral regurgitation
• Hypertrophic obstructive or dilated cardiomyopathy
• Atrial or ventricular septal defect
• Syncope due to other causes
• Ischemic heart disease without valvular abnormality
■ Treatment
• Surgery is indicated for all patients with severe aortic stenosis
(mean aortic valve gradient > 40 mm Hg or valve area ≤ 1.0 cm2
)
and the presence of symptoms or ejection fraction < 50%
• Percutaneous balloon valvuloplasty for temporary (6 months)
relief of symptoms in poor surgical candidates
■ Pearl
In many cases, the softer the murmur, the worse the stenosis.
Reference
Dal-Bianco JP, Khandheria BK, Mookadam F, Gentile F, Sengupta PP.
Management of asymptomatic severe aortic stenosis. J Am Coll Cardiol
2008;52:1279. [PMID: 18929238]
What's nifedipine? |
Only use the information provided in the below context block to asnwer the question. Your answer should be in paragraph format and no more than 200 words. | What are the key points of Section 455 of the Higher Education Act? | On August 8, 2020, President Trump signed a presidential memorandum expressing his view that payments and interest accrual on student loans should remain suspended past September 30, 2020, “until such time that the economy has stabilized, schools have re-opened, and the crisis brought on by the COVID-19 pandemic has subsided.” The memorandum directs the Secretary of Education to “continue the temporary cessation of payments and the waiver of all interest on student loans held by the Department of Education until December 31, 2020.”
The memorandum cites Section 455(f)(2)(D) of the Higher Education Act (HEA), which allows eligible borrowers to defer certain federally held student loans if they experience economic hardship. Such a deferment temporarily relieves the borrower of an obligation to pay principal installments on the loan. For some (but not all) loans, a deferment also temporarily suspends the accrual of loan interest.
To implement the proposed suspension of payments and interest accrual, the presidential memorandum directs the Secretary of Education “to take action pursuant to applicable law to effectuate appropriate waivers of and modifications to the requirements and conditions of economic hardship deferment described in” HEA Section 455(f)(2)(D). Under HEA Section 435(o)—which Section 455(f)(2)(D) incorporates by reference—a borrower is eligible for an economic hardship deferment if the borrower is (1) working full-time and (2) earning an amount of money that falls below a specified threshold. But HEA Sections 455(f)(2)(D) and 435(o) also authorize the Secretary of Education to promulgate regulations making additional borrowers eligible for an economic hardship deferment. The Secretary of Education previously issued regulations making economic hardship deferments available to certain borrowers who might not otherwise meet the criteria specified in Section 435(o).
The presidential memorandum contemplates that the Secretary of Education will exercise available statutory authorities to further expand economic hardship deferment eligibility to borrowers adversely affected by the COVID-19 pandemic. The presidential memorandum may raise several questions for policymakers. First, as mentioned above, a deferment under Section 455(f) does not suspend interest accrual for all types of student loans. To the contrary, Section 435(f)(1)(B) states that for certain loans, interest “shall accrue and be capitalized or paid by the borrower” during the deferment period. It therefore may be uncertain whether Section 455(f), standing alone, allows the Executive to waive “all interest on student loans held by the Department of Education” as the presidential memorandum contemplates. That said, the presidential memorandum directs the Secretary of Education to take action pursuant to “applicable law” to effectuate the memorandum’s directives. The Trump Administration might attempt to argue that other provisions of federal law give the Secretary of Education the power to waive all interest on student loans held by the Department of Education, even if Section 455 does not.
Second, the memorandum does not explicitly specify who will be eligible for the expanded economic hardship deferments. The memorandum appears to contemplate, however, that the Secretary of Education will make those deferments available to all borrowers who are currently covered by the CARES Act’s payment and interest suspension provisions.
Third, while the presidential memorandum states that “[a]ll persons who wish to continue making student loan payments shall be allowed to do so,” it does not specify whether borrowers will need to apply for the deferments, or if the Secretary of Education will instead automatically grant deferments to eligible borrowers unless they opt out. Under existing regulations, deferment is (with limited exceptions) not automatic; a borrower must usually request a deferment and submit an application containing various documents. Although the Secretary of Education could potentially amend those regulations to automatically grant deferments, doing so could have both advantages and disadvantages. On one hand, dispensing with the requirement that borrowers file an application to receive a deferment could reduce burdens on both borrowers and the federal government. On the other hand, some borrowers might prefer not to receive an automatic deferment, preferring to continue paying off their loans. For instance, some student loan forgiveness programs—such as the Public Service Loan Forgiveness (PSLF) Program— require the borrower to make payments over an extended period to receive relief.
Fourth, Section 3513 of the CARES Act affords borrowers certain types of relief that the presidential memorandum does not mention. For instance:
• Section 3513(e) suspends involuntary collection on student loans covered by the CARES Act’s principal and interest suspension provisions.
• Section 3513(d) affords borrowers certain consumer credit reporting protections during the suspension period.
• Section 3513(c) requires the Secretary of Education to “deem each month for which a loan payment was suspended” under the CARES Act as if the borrower “had made a payment for the purpose of any loan forgiveness program or loan rehabilitation program,” such as the PSLF program.
The memorandum does not expressly address these topics. Notably, however, when the Trump Administration took administrative action in March 2020 to grant relief to student loan borrowers, the Secretary of Education instructed the U.S. Treasury and collection agencies to cease involuntary collection actions and wage garnishments for at least 60 days.
Also, it is presently unclear whether the Secretary of Education will give stakeholders an opportunity to comment on any regulations she might promulgate to implement the memorandum. Federal law ordinarily requires the Secretary of Education to engage in a negotiated rulemaking process with stakeholders and accept and consider public comments before a regulation governing student loans becomes effective. But the Secretary of Education may bypass these procedures when following them would be “impracticable, unnecessary, or contrary to the public interest.” Given the significant and continuing impact of COVID-19 and Section 3513’s impending expiration date, the Secretary of Education might be able to publish regulations to implement aspects of the memorandum that become effective immediately, without first accepting public comment.
| What are the key points of Section 455 of the Higher Education Act?
Only use the information provided in the below context block to answer the question. Your answer should be in paragraph format and no more than 200 words.
For this task, you are required to use only the information that is provided in the prompt. You cannot use any outside information or sources. Do not reference any knowledge outside of what is explicitly provided. | What are the criteria that must be met for a precedent to be overruled? | The more difficult question in this case is stare decisis—
that is, whether to overrule the Roe decision.
The principle of stare decisis requires respect for the
Court’s precedents and for the accumulated wisdom of the
judges who have previously addressed the same issue.
Stare decisis is rooted in Article III of the Constitution and
is fundamental to the American judicial system and to the
stability of American law.
Adherence to precedent is the norm, and stare decisis imposes a high bar before this Court may overrule a precedent. This Court’s history shows, however, that stare decisis is not absolute, and indeed cannot be absolute.
Otherwise, as the Court today explains, many long-since-overruled cases such as Plessy v. Ferguson, 163 U. S. 537
(1896); Lochner v. New York, 198 U. S. 45 (1905); Minersville School Dist. v. Gobitis, 310 U. S. 586 (1940); and Bowers v. Hardwick, 478 U. S. 186 (1986), would never have
been overruled and would still be the law.
In his canonical Burnet opinion in 1932, Justice Brandeis
stated that in “cases involving the Federal Constitution,
where correction through legislative action is practically
impossible, this Court has often overruled its earlier decisions.” Burnet v. Coronado Oil & Gas Co., 285 U. S. 393,
406−407 (1932) (dissenting opinion). That description of
the Court’s practice remains accurate today. Every current
Member of this Court has voted to overrule precedent. And
over the last 100 years beginning with Chief Justice Taft’s
appointment in 1921, every one of the 48 Justices appointed
to this Court has voted to overrule precedent. Many of
those Justices have voted to overrule a substantial number
of very significant and longstanding precedents. See, e.g.,
Obergefell v. Hodges, 576 U. S. 644 (2015) (overruling Baker
v. Nelson); Brown v. Board of Education, 347 U. S. 483
(1954) (overruling Plessy v. Ferguson); West Coast Hotel Co.
v. Parrish, 300 U. S. 379 (1937) (overruling Adkins v. Children’s Hospital of D. C. and in effect Lochner v. New York).
But that history alone does not answer the critical question: When precisely should the Court overrule an erroneous constitutional precedent? The history of stare decisis in
this Court establishes that a constitutional precedent may
be overruled only when (i) the prior decision is not just
wrong, but is egregiously wrong, (ii) the prior decision has
caused significant negative jurisprudential or real-world
consequences, and (iii) overruling the prior decision would
not unduly upset legitimate reliance interests. See Ramos
v. Louisiana, 590 U. S. ___, ___−___ (2020) (KAVANAUGH, J.,
concurring in part) (slip op., at 7−8).
Applying those factors, I agree with the Court today that
Roe should be overruled. The Court in Roe erroneously assigned itself the authority to decide a critically important
moral and policy issue that the Constitution does not grant
this Court the authority to decide. As Justice Byron White
succinctly explained, Roe was “an improvident and extravagant exercise of the power of judicial review” because
“nothing in the language or history of the Constitution” supports a constitutional right to abortion. Bolton, 410 U. S.,
at 221−222 (dissenting opinion).
Of course, the fact that a precedent is wrong, even egregiously wrong, does not alone mean that the precedent
should be overruled. But as the Court today explains, Roe
has caused significant negative jurisprudential and real-world consequences. By taking sides on a difficult and contentious issue on which the Constitution is neutral, Roe
overreached and exceeded this Court’s constitutional authority; gravely distorted the Nation’s understanding of
this Court’s proper constitutional role; and caused significant harm to what Roe itself recognized as the State’s “important and legitimate interest” in protecting fetal life. 410
U. S., at 162. All of that explains why tens of millions of
Americans—and the 26 States that explicitly ask the Court
to overrule Roe—do not accept Roe even 49 years later.
Under the Court’s longstanding stare decisis principles, Roe
should be overruled.[3]
But the stare decisis analysis here is somewhat more
complicated because of Casey. In 1992, 19 years after Roe,
Casey acknowledged the continuing dispute over Roe. The
Court sought to find common ground that would resolve the
abortion debate and end the national controversy. After
careful and thoughtful consideration, the Casey plurality
reaffirmed a right to abortion through viability (about 24
weeks), while also allowing somewhat more regulation of
abortion than Roe had allowed.[4]
I have deep and unyielding respect for the Justices who
wrote the Casey plurality opinion. And I respect the Casey
plurality’s good-faith effort to locate some middle ground or
compromise that could resolve this controversy for America.
But as has become increasingly evident over time, Casey’s well-intentioned effort did not resolve the abortion debate.
[Footnote 3] I also agree with the Court’s conclusion today with respect to reliance. Broad notions of societal reliance have been invoked in support of Roe, but the Court has not analyzed reliance in that way in the past. For example, American businesses and workers relied on Lochner v. New York, 198 U. S. 45 (1905), and Adkins v. Children’s Hospital of D. C., 261 U. S. 525 (1923), to construct a laissez-faire economy that was free of substantial regulation. In West Coast Hotel Co. v. Parrish, 300 U. S. 379 (1937), the Court nonetheless overruled Adkins and in effect Lochner. An entire region of the country relied on Plessy v. Ferguson, 163 U. S. 537 (1896), to enforce a system of racial segregation. In Brown v. Board of Education, 347 U. S. 483 (1954), the Court overruled Plessy. Much of American society was built around the traditional view of marriage that was upheld in Baker v. Nelson, 409 U. S. 810 (1972), and that was reflected in laws ranging from tax laws to estate laws to family laws. In Obergefell v. Hodges, 576 U. S. 644 (2015), the Court nonetheless overruled Baker.
[Footnote 4] As the Court today notes, Casey’s approach to stare decisis pointed in two directions. Casey reaffirmed Roe’s viability line, but it expressly overruled the Roe trimester framework and also expressly overruled two landmark post-Roe abortion cases—Akron v. Akron Center for Reproductive Health, Inc., 462 U. S. 416 (1983), and Thornburgh v. American College of Obstetricians and Gynecologists, 476 U. S. 747 (1986). See Casey, 505 U. S., at 870, 872−873, 878−879, 882. Casey itself thus directly contradicts any notion of absolute stare decisis in abortion cases.
The national division has not ended. In recent years, a significant number of States have enacted abortion restrictions that directly conflict with Roe. Those laws cannot
be dismissed as political stunts or as outlier laws. Those
numerous state laws collectively represent the sincere and
deeply held views of tens of millions of Americans who continue to fervently believe that allowing abortions up to 24
weeks is far too radical and far too extreme, and does not
sufficiently account for what Roe itself recognized as the
State’s “important and legitimate interest” in protecting fetal life. 410 U. S., at 162. In this case, moreover, a majority
of the States—26 in all—ask the Court to overrule Roe and
return the abortion issue to the States.
In short, Casey’s stare decisis analysis rested in part on a
predictive judgment about the future development of state
laws and of the people’s views on the abortion issue. But
that predictive judgment has not borne out. As the Court
today explains, the experience over the last 30 years conflicts with Casey’s predictive judgment and therefore undermines Casey’s precedential force.5 | System instruction: For this task, you are required to use only the information that is provided in the prompt. You cannot use any outside information or sources. Do not reference any knowledge outside of what is explicitly provided.
Question: What are the criteria that must be met for a precedent to be overruled?
Context: The more difficult question in this case is stare decisis—
that is, whether to overrule the Roe decision.
The principle of stare decisis requires respect for the
Court’s precedents and for the accumulated wisdom of the
judges who have previously addressed the same issue.
Stare decisis is rooted in Article III of the Constitution and
is fundamental to the American judicial system and to the
stability of American law.
Adherence to precedent is the norm, and stare decisis imposes a high bar before this Court may overrule a precedent. This Court’s history shows, however, that stare decisis is not absolute, and indeed cannot be absolute.
Otherwise, as the Court today explains, many long-since-overruled cases such as Plessy v. Ferguson, 163 U. S. 537
(1896); Lochner v. New York, 198 U. S. 45 (1905); Minersville School Dist. v. Gobitis, 310 U. S. 586 (1940); and Bowers v. Hardwick, 478 U. S. 186 (1986), would never have
been overruled and would still be the law.
In his canonical Burnet opinion in 1932, Justice Brandeis
stated that in “cases involving the Federal Constitution,
where correction through legislative action is practically
impossible, this Court has often overruled its earlier decisions.” Burnet v. Coronado Oil & Gas Co., 285 U. S. 393,
406−407 (1932) (dissenting opinion). That description of
the Court’s practice remains accurate today. Every current
Member of this Court has voted to overrule precedent. And
over the last 100 years beginning with Chief Justice Taft’s
appointment in 1921, every one of the 48 Justices appointed
to this Court has voted to overrule precedent. Many of
those Justices have voted to overrule a substantial number
of very significant and longstanding precedents. See, e.g.,
Obergefell v. Hodges, 576 U. S. 644 (2015) (overruling Baker
v. Nelson); Brown v. Board of Education, 347 U. S. 483
(1954) (overruling Plessy v. Ferguson); West Coast Hotel Co.
v. Parrish, 300 U. S. 379 (1937) (overruling Adkins v. Children’s Hospital of D. C. and in effect Lochner v. New York).
But that history alone does not answer the critical question: When precisely should the Court overrule an erroneous constitutional precedent? The history of stare decisis in
this Court establishes that a constitutional precedent may
be overruled only when (i) the prior decision is not just
wrong, but is egregiously wrong, (ii) the prior decision has
caused significant negative jurisprudential or real-world
consequences, and (iii) overruling the prior decision would
not unduly upset legitimate reliance interests. See Ramos
v. Louisiana, 590 U. S. ___, ___−___ (2020) (KAVANAUGH, J.,
concurring in part) (slip op., at 7−8).
Applying those factors, I agree with the Court today that
Roe should be overruled. The Court in Roe erroneously assigned itself the authority to decide a critically important
moral and policy issue that the Constitution does not grant
this Court the authority to decide. As Justice Byron White
succinctly explained, Roe was “an improvident and extravagant exercise of the power of judicial review” because
“nothing in the language or history of the Constitution” supports a constitutional right to abortion. Bolton, 410 U. S.,
at 221−222 (dissenting opinion).
Of course, the fact that a precedent is wrong, even egregiously wrong, does not alone mean that the precedent
should be overruled. But as the Court today explains, Roe
has caused significant negative jurisprudential and real-world consequences. By taking sides on a difficult and contentious issue on which the Constitution is neutral, Roe
overreached and exceeded this Court’s constitutional authority; gravely distorted the Nation’s understanding of
this Court’s proper constitutional role; and caused significant harm to what Roe itself recognized as the State’s “important and legitimate interest” in protecting fetal life. 410
U. S., at 162. All of that explains why tens of millions of
Americans—and the 26 States that explicitly ask the Court
to overrule Roe—do not accept Roe even 49 years later.
Under the Court’s longstanding stare decisis principles, Roe
should be overruled.[3]
But the stare decisis analysis here is somewhat more
complicated because of Casey. In 1992, 19 years after Roe,
Casey acknowledged the continuing dispute over Roe. The
Court sought to find common ground that would resolve the
abortion debate and end the national controversy. After
careful and thoughtful consideration, the Casey plurality
reaffirmed a right to abortion through viability (about 24
weeks), while also allowing somewhat more regulation of
abortion than Roe had allowed.[4]
I have deep and unyielding respect for the Justices who
wrote the Casey plurality opinion. And I respect the Casey
plurality’s good-faith effort to locate some middle ground or
compromise that could resolve this controversy for America.
But as has become increasingly evident over time, Casey’s well-intentioned effort did not resolve the abortion debate.
[Footnote 3] I also agree with the Court’s conclusion today with respect to reliance. Broad notions of societal reliance have been invoked in support of Roe, but the Court has not analyzed reliance in that way in the past. For example, American businesses and workers relied on Lochner v. New York, 198 U. S. 45 (1905), and Adkins v. Children’s Hospital of D. C., 261 U. S. 525 (1923), to construct a laissez-faire economy that was free of substantial regulation. In West Coast Hotel Co. v. Parrish, 300 U. S. 379 (1937), the Court nonetheless overruled Adkins and in effect Lochner. An entire region of the country relied on Plessy v. Ferguson, 163 U. S. 537 (1896), to enforce a system of racial segregation. In Brown v. Board of Education, 347 U. S. 483 (1954), the Court overruled Plessy. Much of American society was built around the traditional view of marriage that was upheld in Baker v. Nelson, 409 U. S. 810 (1972), and that was reflected in laws ranging from tax laws to estate laws to family laws. In Obergefell v. Hodges, 576 U. S. 644 (2015), the Court nonetheless overruled Baker.
[Footnote 4] As the Court today notes, Casey’s approach to stare decisis pointed in two directions. Casey reaffirmed Roe’s viability line, but it expressly overruled the Roe trimester framework and also expressly overruled two landmark post-Roe abortion cases—Akron v. Akron Center for Reproductive Health, Inc., 462 U. S. 416 (1983), and Thornburgh v. American College of Obstetricians and Gynecologists, 476 U. S. 747 (1986). See Casey, 505 U. S., at 870, 872−873, 878−879, 882. Casey itself thus directly contradicts any notion of absolute stare decisis in abortion cases.
The national division has not ended. In recent years, a significant number of States have enacted abortion restrictions that directly conflict with Roe. Those laws cannot
be dismissed as political stunts or as outlier laws. Those
numerous state laws collectively represent the sincere and
deeply held views of tens of millions of Americans who continue to fervently believe that allowing abortions up to 24
weeks is far too radical and far too extreme, and does not
sufficiently account for what Roe itself recognized as the
State’s “important and legitimate interest” in protecting fetal life. 410 U. S., at 162. In this case, moreover, a majority
of the States—26 in all—ask the Court to overrule Roe and
return the abortion issue to the States.
In short, Casey’s stare decisis analysis rested in part on a
predictive judgment about the future development of state
laws and of the people’s views on the abortion issue. But
that predictive judgment has not borne out. As the Court
today explains, the experience over the last 30 years conflicts with Casey’s predictive judgment and therefore undermines Casey’s precedential force.5 |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | Discuss the purpose of Artificial Intelligence in relationship with the financial industry as outlined in this article. Keep the response under 200 words and do not use the word technology | Artificial Intelligence and Machine Learning in
Financial Services
The financial industry’s adoption of artificial intelligence (AI) and machine learning (ML) is
evolving as financial firms employ ever greater levels of technology and automation to deliver
services. Expanding on earlier models of quantitative analysis, AI/ML has often been adopted in
finance to solve discrete challenges, such as maximizing profit and minimizing risk. Yet the
industry’s adoption of the newer technology also occurs against perceptions that are steeped in
tradition and historical financial regulation, and regulators want to ensure that the technology
does not sidestep regulations frequently described as technology neutral.
Technological advances in computer hardware, capacity, and data storage—which permit the collection and analysis of
data—helped fuel the development and use of AI/ML technologies in finance. Unlike older algorithms that automated
human-coded rules, new AI models can “learn” by themselves and make inferences and recommendations not identified by
modelers in advance. This shift in technology has also enabled the use of new types of data including alternative data (i.e.,
data that the consumer credit bureaus do not traditionally use), unstructured data (images or social media posts, etc.), and
unlabeled information data—which, when combined, extend the technologies’ uses to new financial services or products.
Different parts of the financial services industry have adopted AI/ML technology to varying degrees and for various
purposes. Some uses of AI/ML include powering chatbots in customer service functions, identifying investment opportunities
and/or executing trades, augmenting lending models or (more sparingly) making lending decisions, and identifying and
preventing fraud. The extent to which a sector or firm adopts various technologies reflects a variety of factors, including a
firm’s ability to fund internal development and regulatory requirements.
The increased use of AI/ML to deliver financial services has attracted attention and led to numerous policy issues and
subsequent policy actions. Such policy actions culminated in (1) the establishment of a task force on AI in the 116th Congress and the more recent working group in the House Committee on Financial Services in the 118th and (2) 2019 and 2023
executive orders. The evolving legislative and regulatory framework regarding AI/ML use in finance is likely, at least in part,
to influence the development of AI/ML financial services applications. Various financial regulators have indicated that
regulated entities are subject to the full range of laws and regulations regardless of the technology used. Additionally, some
regulators have identified regulations and issued guidance of particular relevance to financial firms employing AI/ML
technologies.
Financial industry policymakers face competing pressures. Financial service providers and technology companies are likely
to continue adopting and promoting AI/ML to save time and money and promote accessibility, accuracy, and regulatory
compliance. However, challenges and risks in the form of bias, potential for systemic risk and manipulation, affordability,
and consequences for employment remain. Determining whether the existing regulatory structure is sufficient—or whether
one that is more closely tailored to the technological capacities of the evolving technology is necessary—has emerged as a
key consideration. Should Congress consider the legislative framework governing AI/ML in finance, industry and consumers
alike will expect that it weighs the benefits of innovation with existing and potential future challenges and risks. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
Discuss the purpose of Artificial Intelligence in relationship with the financial industry as outlined in this article. Keep the response under 200 words and do not use the word technology
<TEXT>
Artificial Intelligence and Machine Learning in
Financial Services
The financial industry’s adoption of artificial intelligence (AI) and machine learning (ML) is
evolving as financial firms employ ever greater levels of technology and automation to deliver
services. Expanding on earlier models of quantitative analysis, AI/ML has often been adopted in
finance to solve discrete challenges, such as maximizing profit and minimizing risk. Yet the
industry’s adoption of the newer technology also occurs against perceptions that are steeped in
tradition and historical financial regulation, and regulators want to ensure that the technology
does not sidestep regulations frequently described as technology neutral.
Technological advances in computer hardware, capacity, and data storage—which permit the collection and analysis of
data—helped fuel the development and use of AI/ML technologies in finance. Unlike older algorithms that automated
human-coded rules, new AI models can “learn” by themselves and make inferences and recommendations not identified by
modelers in advance. This shift in technology has also enabled the use of new types of data including alternative data (i.e.,
data that the consumer credit bureaus do not traditionally use), unstructured data (images or social media posts, etc.), and
unlabeled information data—which, when combined, extend the technologies’ uses to new financial services or products.
Different parts of the financial services industry have adopted AI/ML technology to varying degrees and for various
purposes. Some uses of AI/ML include powering chatbots in customer service functions, identifying investment opportunities
and/or executing trades, augmenting lending models or (more sparingly) making lending decisions, and identifying and
preventing fraud. The extent to which a sector or firm adopts various technologies reflects a variety of factors, including a
firm’s ability to fund internal development and regulatory requirements.
The increased use of AI/ML to deliver financial services has attracted attention and led to numerous policy issues and
subsequent policy actions. Such policy actions culminated in (1) the establishment of a task force on AI in the 116th Congress and the more recent working group in the House Committee on Financial Services in the 118th and (2) 2019 and 2023
executive orders. The evolving legislative and regulatory framework regarding AI/ML use in finance is likely, at least in part,
to influence the development of AI/ML financial services applications. Various financial regulators have indicated that
regulated entities are subject to the full range of laws and regulations regardless of the technology used. Additionally, some
regulators have identified regulations and issued guidance of particular relevance to financial firms employing AI/ML
technologies.
Financial industry policymakers face competing pressures. Financial service providers and technology companies are likely
to continue adopting and promoting AI/ML to save time and money and promote accessibility, accuracy, and regulatory
compliance. However, challenges and risks in the form of bias, potential for systemic risk and manipulation, affordability,
and consequences for employment remain. Determining whether the existing regulatory structure is sufficient—or whether
one that is more closely tailored to the technological capacities of the evolving technology is necessary—has emerged as a
key consideration. Should Congress consider the legislative framework governing AI/ML in finance, industry and consumers
alike will expect that it weighs the benefits of innovation with existing and potential future challenges and risks.
https://crsreports.congress.gov/product/pdf/R/R47997 |
Create a short paragraph response to the question using clear, precise vocabulary. This should only rely on information contained in the text. | What is Actor Network Theory, and how does it help us understand the failure of Google Glass? | Analysis of the Google Glass Failure and Why Things May Be Different Now
A Research Paper submitted to the Department of Engineering and Society
Presented to the Faculty of the School of Engineering and Applied Science
University of Virginia • Charlottesville, Virginia
In Partial Fulfillment of the Requirements for the Degree
Bachelor of Science, School of Engineering
Tyler Labiak
Spring, 2021
On my honor as a University Student, I have neither given nor received
unauthorized aid on this assignment as defined by the Honor Guidelines
for Thesis-Related Assignments
Signature __________________________________________ Date __________
Tyler Labiak
Approved __________________________________________ Date __________
Sharon Tsai-hsuan Ku, Department of Engineering and Society
5/8/2021
Introduction
As technology continues to advance at breakneck speeds into the unknown, humans are
increasingly defined by their creations. Inventions alter history, mediate human-perception,
deepen (or obscure) knowledge, and modify socialization. Also, throughout history, technology
has come to exist through human political, economic, cultural, and social factors (Law, 1987).
To best understand and guide the development of technology, and consequently humanity, much
work has been done researching the social means by which technology comes to exist and,
inversely, the effects of technology on society.
Of course, the human drivers behind technology’s development and adoption are not
static. Social constructs like privacy, data ethics, safety standards, and social norms change over
time as society changes and, consequently, as technology changes. Therefore, technology must
be evaluated in the context of its creation and usage. This paper hopes to highlight this temporal
element in analyzing technology in the context of a dynamic society.
Google Glass is a device that society rejected not as a bad piece of technology, but rather
as a socio-technical artifact. The reality of Google Glass is that its engineers did not consciously
design the human-technological interaction that they were creating and failed to see how the
product would affect social interactions and perceptions of privacy. As a result, there was
backlash against the product leading to its failure. However, today’s attitudes surrounding
technology and privacy have relaxed further; technological advances have shaped a sociotechnical
context where Glass may succeed today or in the future. This paper utilizes Actor Network
Theory to demonstrate how Google failed to coalesce a human, non-human network in
developing Glass, expanding on prior work to show how the conditions surrounding Glass have
evolved with time. To achieve the above conclusions, this paper analyzes media and primary
sources from the time of release of Glass, academic and retrospective journalism pertaining to
the failure of Glass, interviews with non-experts and experts about this technology, and current
Glass enthusiasts via the Google Glass subreddit.
Literature Review
In April 2013 Google began accepting applications for the public to purchase a pair of
smart glasses that Google believed was a major step in the direction of their dream “that
computers and the Internet will be accessible anywhere and we can ask them to do things without
lifting a finger” (Miller, 2013). This was the Explorer version of Google Glass, outfitted with a
small screen and camera, and connected to a smartphone and the internet over Bluetooth or Wifi
(Miller, 2013). Essentially a beta test for developers, the purpose of the “Explorer program [was]
to find out how people want to (and will) use Glass” (Topolsky, 2013). The expectations around
Google Glass were massive, with Business Insider (2013) expecting a $10.5 billion
opportunity for Google as unit sales would increase and the price would decrease until Glass was
the next “ubiquitous” technology. However, the glasses failed spectacularly with media citing
that Google overpromised and underdelivered (Yoon, 2018). Of course, this does not tell the
entire story.
Many people will not know that Google Glass still exists in the form of Glass Enterprise.
Google rebranded the tech to sell to manufacturing, healthcare, and logistics businesses for a
workplace hands-off augmented reality computer (“Glass”, 2021). Similarly, Microsoft Hololens
offers a headset-based industrial mixed reality solution (“Hololens”, 2021). So, if these
technologies have proven themselves in a commercial space, what went wrong in the public
setting? During Glass’s Explorer phase there was a slew of privacy concerns associated with the
fact that wearing Glass meant wielding a camera at all times. To some, Google Glass was a rare
example of people pushing back against big tech regarding privacy. People were kicked out of
bars because of the recording aspect, the NYT ran a front-page story about privacy concerns,
activists set up groups to push back against the product, and policies were implemented that
forbid people from taking pictures without consent (Eveleth, 2018). Kudina and Verbeek (2019)
explored how Glass mediated the value of privacy by analyzing YouTube comments from the
time of release. However, there is little consideration given to the temporal aspects of sociotechnical interaction. It is essential that Glass is examined, not only in the context of its release,
but also with respect to changing norms, human perceptions, and technologies. Without asking
these questions, we remain unprepared to answer whether a similar technology could succeed
today or in the future.
“Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value
of Privacy” by Olya Kudina and Paul-Peter Verbeek (2019) examines online discussions about
Google Glass, particularly comments on a YouTube video produced by Google, in order to
understand “how people articulate new meanings of the value of privacy.” This case study serves
as a demonstration of Verbeek’s own Theory of Technological Mediation, which allows a focus
on “the dynamics of the interaction between technologies and human values” as a way of
addressing the Collingridge Dilemma, which applied here says that when a technology is young
it is unknown how it will affect systems, and that by the time the morality surrounding
technology is clear, it is difficult to develop the already widespread technology.
According to mediation theory, engineers design not just products, but they design
human-technological interactions in the world. Technology acts as a mediator, shaping personal
experiences and objects while humans and tech are not separate, but affect each other in their
relations. Rather than speculating about the future, “it studies the dynamics of technomoral
change itself.” While Verbeek’s paper serves as a launch point for human perception around the
time of Glass’s release, and is drawn upon greatly in the below analysis, the data set is of course
not representative of today’s cultural technological landscape. Therefore, this paper hopes to
extend on this work in describing not just Glass’s initial rejection given its social context at the
time, but also inspect perceptions of the technology today.
Conceptual Frameworks and Research Methods
This paper draws mainly on Darryl Cressman’s (2009) overview of Actor Network
Theory and the following definitions are derived from his work unless otherwise cited. In Actor
Network Theory everything, both human and non-human can be viewed as both an actor and a
network. These actor networks are therefore sociotechnical in nature, and they are sometimes
referred to as heterogeneous networks. A network is defined by the associations it describes;
therefore, power of the network and association are intertwined. Additionally, power and
meaning are not inherent to any single actor within a network, rather they are associative,
relational and contextual. When that actor becomes part of another network its associations
change, and as a result its power or meaning changes. Meaning is ascribed to actors with a
network contextually rather innately (Cressman, 2009).
Engineers in ANT practice heterogeneous engineering, assembling actor networks that
are both human and technical in nature. To understand how the world works, practitioners of
ANT must understand how economic, political, social, and technological understanding interact
with each other. In contrast to other STS theories, ANT is symmetrical in the influence of both
the technical and nontechnical (Cressman, 2009).
Technological innovation comes from the process in ANT known as translation. This is
the process by which both the social and technical actors are recruited into a network. This does
not happen at once, rather actors are recruited in a gradient as the network gradually becomes
more robust. In understanding the world through ANT, there is an emphasis on connections
rather than the individual, and these connections are not all equal (Cressman, 2009).
The conclusion of Actor Network Theory is that for a network to succeed, an engineer
must consider all actors human, nonhuman, technical, political, economic, social, etc. Engineers
are therefore world builders (Law, 1987), and recruiting actors to make a socially robust network
is the triumph of a network. Neglecting the social aspects, or encountering rogue actors, leads to
a failed network. It will be shown that this is exactly how Google failed as a network builder;
thus, the tools of ANT were chosen to explore this dynamic.
In addition to the academic papers cited and journalistic releases analyzed below, two
means of research were also applied. In order to gain a sense of how potential users today
perceive Google Glass or similar technology, interviews were conducted on a group of nonexperts and peers, as well as one industry expert, and enthusiasts of the technology were gauged
via posts on the Google Glass enthusiast subreddit “r/googleglass”.
The purpose of the interviews was not to poll a representative set of the opinions
surrounding Glass, rather to guide research and find some interesting perspectives surrounding
the technology and privacy today. Subjects were aged 22 to 57 and varied in occupation,
education and sex. All interviewees could be broadly incorporated in a target audience for
Google, as all of them consume smart technology already. The interviewees were asked what
they knew about Google Glass, then asked a set of questions about smart technology, ubiquitous
recording, privacy, and legality, and finally were asked to give their opinion on the viability of a
product like Glass coming back in the future. Interviewees’ knowledge regarding Glass ranged
from having never heard of the product, to familiarity, to an industry insider who markets
technology products for Apple and has worked with former Glass developers. It is important to
emphasize that these interviewees were not randomly selected and do not number great enough
to act as a focus group of sorts for the product, but they guided research and topic discovery as
well as offer a scope of perspectives in this area.
The second group considered are users of the subreddit “r/googleglass.” This is an
enthusiast forum dedicated to the technology. The studied materials consisted of the forum “Top,
This Year” as of March 2021, meaning that these posts were the most “upvoted,” or received the
most positive interactions on that forum for the year. These posts were chosen because they
represent the most popular current opinions and perceptions from the enthusiast community.
Analysis
In the case of Glass, Google serves as the network builder in assembling Glass, not just
the technology but also the human technical interactions. In ANT, technology and society are
dual, in that they exert influence on and shape each other. This is also articulated by Verbeek
through his Technological Mediation framework, claiming that Glass mediates privacy itself. In
the below section, media from the time of Glass’s release will be analyzed to see how the
associations of Glass with these rogue actors lead to the destabilization of the network all
together.
Moving chronologically, the first article analyzed is from February 2013, nearly two
months prior to the launch of the explorer version of Glass. A writer from The Verge met with
Google Glass lead designers for an early test of the product. The author of the article is, as
expected of a tech magazine writer, very optimistic about Google Glass noting the “tremendous
value and potential.” He praises the design, appearance, functionality and wearability of the
device but also posts an early warning sign about the fate of Glass:
“At one point during my time with Glass, we all went out to navigate to a nearby
Starbucks — the camera crew I’d brought with me came along. As soon as we got inside
however, the employees at Starbucks asked us to stop filming. Sure, no problem. But I
kept the Glass’ video recorder going, all the way through my order and getting my coffee.
Yes, you can see a light in the prism when the device is recording, but I got the
impression that most people had no idea what they were looking at. The cashier seemed
to be on the verge of asking me what I was wearing on my face, but the question never
came. He certainly never asked me to stop filming.” (Topolsky, 2013)
The author is keenly aware of the issues looming for Google Glass, saying in his own
words, “The privacy issue is going to be a big hurdle for Google with Glass”. When he brought
this up to the Glass designers he met with (product director Steve Lee and lead industrial
designer Isabelle Olsson). Their belief was that the explorer program was their way of
developing an etiquette surrounding Glass. The use and misuse of the program would be
monitored by Google and feedback would move the product forward. The author then remarks
“that’s not going to answer questions about what’s right and wrong to do with a camera that
doesn’t need to be held up to take a photo, and often won’t even be noticed by its owner’s
subjects. Will people get comfortable with that? Are they supposed to?” (Topolsky, 2013) From
an ANT perspective, privacy is not just a concept but rather a socio-technical actor existing in
their network. It is equally important for Google to consider how Glass allows people to record
the world and affect others notions of privacy. However, there is almost an apathy here in the
Explorer program. Google acknowledges that through the Explorer program they can develop an
etiquette around Glass, while recruiting people to their network, but without taking an active role
in designing a project that is socially harmonious, their network is unstable. As the author stated,
their tech fails to answer the questions about what people will do with a camera and if that’s
okay.
Google’s technological optimism, or potentially their naivete, comes through perhaps
strongest in an influential New York Times article from the time of release of Google Glass. This
article, titled “Google Glass Picks Up Early Signal: Keep Out” details the negative responses that
Google Glass was getting form various places around the country, and contrasts it with Google’s
reputation for being cavalier around privacy. The article quotes Google’s former CEO Eric
Schmidt in 2009 saying, “If you have something that you don’t want anyone to know, maybe you
shouldn’t be doing it in the first place” (Streitfield, 2013). It is clear that this policy permeates
through to their implementation of Glass, which potentially immortalizes the public realm
through ubiquitous recording, thus making potentially everything known to everyone. A
contributing law expert is quoted as saying “We are all now going to be both the paparazzi and
the paparazzi’s target” (Streitfield, 2013). Furthermore, the article reports that app developers
made photography with glass as discrete as simply winking in one application (Streitfield, 2013).
To many this makes Glass even more intrusive, and although it is unclear if Google would have
allowed a feature like this in their final release, the Explorer program was essential for gradually
recruiting public acceptance into the Glass network. Of course, they failed to do so.
The New York Times article also speaks on a Seattle bar that banned Glass. The owner of
the bar reported to Geekwire that Glass disrupted the private and seedy culture of his bar:
“People want to go there and be not known … and definitely don’t want to be secretly filmed or
videotaped and immediately put on the Internet” (Bishop, 2013). He also notes that “tech geeks”
from Amazon frequent the bar, and he doesn’t want them using Glass inside. This “tech geek
backlash” is another overarching theme regarding these reactionary articles. As one writer put it,
“Google’s core mistake was allowing geeky developers to become the face of Glass” (Constine,
2017). This made recruitment of a more privacy conscious group difficult for Google, since all of
a sudden there was an Us vs the “Glassholes” dynamic.
The Five Point owners, the geeky big tech employees, and its various bargoers represent
a key factor that must be considered when assembling a socially robust network. The
associations surrounding actors in a network are entirely variable and context dependent. Where
Google employees may look favorably on how Glass impacts social dynamics, the same is not
true of all society. The heterogenous engineer of ANT does engineering outside the lab creating a
network that is socially robust, while Google perhaps looks only inward on its own norms.
Kudina and Verbeek’s (2019) paper using Technological Mediation digs deeper into how
Google approached human behavior and Glass. Google called on the best judgement of its users
and published a list of dos and don’ts surrounding Glass and sat back to observe. The author
turns to gauging Glass’s mediation of privacy and social interaction via YouTube comments on
this list of dos and don’ts during the explorer phase of Glass. One conclusion is that “Glass
appears as a mediating boundary object between what commenters consider private even in the
most public places and what is violated when the device is introduced,” and to online
commenters “the privacy of forgetting [is] at stake with Glass.” As a designer, under
Technological Mediation, Google creates the mediations, and perhaps for Glass to succeed they
needed to be aware of what human interactions and perceptions were actually being designed.
This wraps into actor network theory nicely, since under ANT designers are responsible for
recruiting the actors that are both human and nonhuman to a sociotechnical network. The
designers are the builders of society, and Google failed in creating a network that is stable both
socially and technically. A few years later, SnapChat created the spectacles which were smart
glasses that allowed users to record footage and post to snapchat. However, without addressing
the social issues, the same fate occurred and this network too failed as reported by TechCrunch:
“Google Glass tainted the market with its ‘not sure if you’re recording me’ design. Even though
Snap put more obvious recording signal lights on Spectacles, people would still question you
about whether they were on camera. That not only made people uncomfortable being around
Spectacles, but made you feel like a bit of a creep just wearing them” (Constine, 2017).
This is not to say that there is no hope for augmented reality. Google Glass still exists in
the form of Glass Enterprise. Google rebranded the tech to sell to manufacturing and healthcare
businesses. In this space the context is completely different, and the expectation of privacy
spoken about by Verbeek does not exist in the same caliber. Privacy, under ANT, can be
considered an actor-network, since it is defined by humans and technological relations.
Therefore, it is also subject to the contextuality present in ANT and takes on a different meaning
in the workplace. There is already perhaps an expectation of being observed at work, at least to
some extent, and especially in manufacturing settings. Computer surveillance software, security
cameras, and sensors in manufacturing monitor people’s actions already in a way that would be
unacceptable in the private space. From an ANT perspective this clearly represents the idea of
variable meanings of actors in different networks. As a result, Google is able to recruit
companies and people to their enterprise network, where associations with privacy hold different
meanings. Strictly speaking, the technology for these products to exist was never the problem;
rather, Glass, people, and privacy take on different meanings in a business setting and the public
space.
While Glass may have been initially deemed unacceptable by society, human perceptions
are subject to change as a result of the technical landscape that permeates life around us. Much
has changed since 2013 when Glass was released. The privacy paradox states that individuals
may desire privacy, but are willing to exchange it for the benefits of technology (Eveleth, 2018).
What was once considered off limits could potentially now be acceptable, as technology has
shaped our views of what is wrong and right. Evidence of this lies in the still active Reddit group
dedicated to Google Glass. Users here are still developing software for Glass including the top
poster writing about hacking a version of Android onto Glass. There are many posts about
buying and selling the technology as well as discussing alternative smart glasses. The discussions
in the community consist of instructions on how to load firmware updates onto Glass as well as
loading custom apps such as Netflix which is popular among users on the subreddit.
Additionally, there are troubleshooting posts, guides, blogs and external repositories of apps
discussed, linked and posted on the forum. In stark contrast to the YouTube comments analyzed
by Verbeek, these users have embraced Glass and taken on the role of designers in an enthusiast
ecosystem. The general consensus here is also a longing or even optimism about new Google
Glass products, and that the original Glass was “ahead of its time”.
In conducted interviews, while the subjects varied in terms of their individual privacy
beliefs, no subject so far has said that Glass or a similar device should be banned. More
importantly, there was consensus that a product like Glass could or will exist in the future. One
expert who Apple contracts for marketing, said “if Apple released their version of Google Glass
in two years it will be everywhere.” Others, especially young people, saw no problem
whatsoever with the technology, even after having experts’ concerns explained. The justification
was that privacy is already so compromised by technology and governments. Nearly all
interviewees believed that if a product like this came out in the future, and was a commercial
success they would see no problem at all. Like the reddit users, these interviewees are a far cry
from the mainstream privacy worry in 2013.
In observing both Reddit posters and interviewees, one cannot determine for certain
whether Google Glass could reclaim success today. However, these examples show that the
meaning of actor networks can be derived from context that is time dependent as well. Glass is
not innately moral or immoral. Rather, society’s expectations for privacy, and their morals
surrounding the subject, change with time and the influence of technology. In Actor Network
Theory this is the symmetry of humans affecting technology and technology impacting humans.
While Google failed to realize how these human factors played into their network originally,
perhaps today or in the near future privacy, as its own actor network, will evolve in such a way
that Glass can exist as a socially robust network.
Conclusion
Google Glass is an actor-network that dissolved, not as a result of its technology or any
specific actor, but rather because of the associations and context these human and nonhuman
actors take on in translating the network. In a public space, from the beginning Glass represented
a form of ubiquitous and secret recording, because there was the assumption that anybody at any
time could be caught on the glasses’ camera. Technology mediates human perception, and in this
case, Glass lends new meaning to what is considered private. In an Actor Network framework,
this is a demonstration of the symmetry of human and non-human artifacts influencing each
other, and without these considerations the network was bound to fail. Rather than design a
product that was both social and technical, Google maintained its cavalier approach to privacy,
not considering how people may have reacted to digitalizing the world’s eyesight.
Google employees and glass users at the time were optimistic about the future of the
product, believing that the product would come to be socially acceptable. This captures the idea
of association and context ascribing meaning in ANT. While Glass may have been acceptable in
Silicon Valley, it did not get the same reception in the main stream. Similarly, while Google
could not release this product to the public it has faced success in manufacturing, healthcare, and
logistic settings as have other augmented reality headsets. Again, here privacy and people’s
expectations take on a new meaning in the Google Glass Enterprise actor-network.
Much has changed since the release of Google Glass. It has become an expectation in
these times that users trade free services for their own personal data. We have all had our ideas
of privacy mediated by technology in this way. It may be possible then, that in the current year or
in the future a product like Glass will resurface, as it has done in the manufacturing space. Some
Reddit users, for example, have put Google Glass to use in their own lives, modifying and
distributing updated software for the glasses. The consensus on these forums is that Glass was
ahead of its time, and there is consensus among interviewed potential users that a product like
this could succeed. From an ANT perspective, again it is clear that the context of associations
within the network matter, rather than the individual parts, and these are all dynamic with respect
to time. If a product like Glass was to reach mainstream success, it would not be strictly the
technology, but rather the recruitment of the technology, the human users, and the social norms
of today or the future that yield a stable network.
While Google Glass as a high-profile product failure has been written about extensively,
there is little in the vein of an STS perspective, and that work focuses on the lens of society at
Glasses release date. The efforts of this paper are to provide an example of how the tools of ANT
can be used to not only analyze the building (and failure) of a technology, but also emphasize
how associations change with context, specifically time. These considerations are essential for
understanding not just the deployment of future technologies, but also the transient nature of
social norms.
References
Bishop, T. (2013, March 8). No Google Glasses allowed, declares Seattle dive bar. GeekWire.
https://www.geekwire.com/2013/google-glasses-allowed-declares-seattle-dive-bar/
Constine, J. (2017, October 28). Why Snapchat Spectacles failed. TechCrunch.
https://social.techcrunch.com/2017/10/28/why-snapchat-spectacles-failed/
Cressman, D. (2009). A Brief Overview of Actor-Network Theory: Punctualization, Heterogeneous
Engineering & Translation. https://summit.sfu.ca/item/13593
Eveleth, R (2018). Google Glass Wasn’t a Failure. It Raised Crucial Concerns. WIRED. (n.d.).
Retrieved November 1, 2020, from https://www.wired.com/story/google-glass-reasonableexpectation-of-privacy/
Glass. (n.d.). Glass. Retrieved November 2, 2020, from https://www.google.com/glass/start/
Insider, B. (n.d.). BI INTELLIGENCE FORECAST: Google Glass Will Be An $11 Billion Market By
2018. Business Insider. Retrieved November 1, 2020, from
https://www.businessinsider.com/google-glass-11-billion-market-by-2018-2013-5
Kudina, O., & Verbeek, P.-P. (2019). Ethics from Within: Google Glass, the Collingridge Dilemma,
and the Mediated Value of Privacy. Science, Technology, & Human Values, 44(2), 291–314.
https://doi.org/10.1177/0162243918793711
Law, J. (1987). On the Social Explanation of Technical Change: The Case of the Portuguese Maritime
Expansion. Technology and Culture, 28(2), 227–252. https://doi.org/10.2307/3105566
Microsoft HoloLens | Mixed Reality Technology for Business. (n.d.). Retrieved March 27, 2021, from
https://www.microsoft.com/en-us/hololens
Miller, C. C. (2013, February 21). Google Searches for Style. The New York Times.
https://www.nytimes.com/2013/02/21/technology/google-looks-to-make-its-computer-glassesstylish.html
Streitfeld, D. (2013, May 7). Google Glass Picks Up Early Signal: Keep Out. The New York Times.
https://www.nytimes.com/2013/05/07/technology/personaltech/google-glass-picks-up-earlysignal-keep-out.html
Topolsky, J. (2013, February 22). I used Google Glass: The future, but with monthly updates. The
Verge. https://www.theverge.com/2013/2/22/4013406/i-used-google-glass-its-the-future-withmonthly-updates | Create a short paragraph response to the question using clear, precise vocabulary. This should only rely on information contained in the text.
What is Actor Network Theory, and how does it help us understand the failure of Google Glass?
Analysis of the Google Glass Failure and Why Things May Be Different Now
A Research Paper submitted to the Department of Engineering and Society
Presented to the Faculty of the School of Engineering and Applied Science
University of Virginia • Charlottesville, Virginia
In Partial Fulfillment of the Requirements for the Degree
Bachelor of Science, School of Engineering
Tyler Labiak
Spring, 2021
On my honor as a University Student, I have neither given nor received
unauthorized aid on this assignment as defined by the Honor Guidelines
for Thesis-Related Assignments
Signature __________________________________________ Date __________
Tyler Labiak
Approved __________________________________________ Date __________
Sharon Tsai-hsuan Ku, Department of Engineering and Society
5/8/2021
Introduction
As technology continues to advance at breakneck speeds into the unknown, humans are
increasingly defined by their creations. Inventions alter history, mediate human-perception,
deepen (or obscure) knowledge, and modify socialization. Also, throughout history, technology
has come to exist through human political, economic, cultural, and social factors (Law, 1987).
To best understand and guide the development of technology, and consequently humanity, much
work has been done researching the social means by which technology comes to exist and,
inversely, the effects of technology on society.
Of course, the human drivers behind technology’s development and adoption are not
static. Social constructs like privacy, data ethics, safety standards, and social norms change over
time as society changes and, consequently, as technology changes. Therefore, technology must
be evaluated in the context of its creation and usage. This paper hopes to highlight this temporal
element in analyzing technology in the context of a dynamic society.
Google Glass is a device that society rejected not as a bad piece of technology, but rather
as a socio-technical artifact. The reality of Google Glass is that its engineers did not consciously
design the human-technological interaction that they were creating and failed to see how the
product would affect social interactions and perceptions of privacy. As a result, there was
backlash against the product leading to its failure. However, today’s attitudes surrounding
technology and privacy have further laxed; technological advances have shaped a sociotechnical
context where Glass may succeed today or in the future. This paper utilizes Actor Network
Theory to demonstrate how Google failed to coalesce a human, non-human network in
developing Glass, expanding on prior work to show how the conditions surrounding Glass have
evolved with time. To achieve the above conclusions, this paper analyzes media and primary
sources from the time of release of Glass, academic and retrospective journalism pertaining to
the failure of Glass, interviews with non-experts and experts about this technology, and current
Glass enthusiasts via the Google Glass subreddit.
Literature Review
In April 2013 Google began accepting applications for the public to purchase a pair of
smart glasses that Google believed was a major step in the direction of their dream “that
computers and the Internet will be accessible anywhere and we can ask them to do things without
lifting a finger” (Miller, 2013). This was the Explorer version of Google Glass, outfitted with a
small screen and camera, and connected to a smartphone and the internet over Bluetooth or Wifi
(Miller, 2013). Essentially a beta test for developers, the purpose of the “Explorer program [was]
to find out how people want to (and will) use Glass” (Topolsky, 2013). The expectations around
Google Glass were massive, with Business Insider (2013) expecting a $10.5 billion dollar
opportunity for Google as unit sales would increase and the price would decrease until Glass was
the next “ubiquitous” technology. However, the glasses failed spectacularly with media citing
that Google overpromised and underdelivered (Yoon, 2018). Of course, this does not tell the
entire story.
Many people will not know that Google Glass still exists in the form of Glass Enterprise.
Google rebranded the tech to sell to manufacturing, healthcare, and logistics businesses for a
workplace hands-off augmented reality computer (“Glass”, 2021). Similarly, Microsoft Hololens
allows a headset based industrial mixed reality solution (“Hololens”, 2021). So, if these
technologies have proven themselves in a commercial space, what went wrong in the public
setting? During Glass’s Explorer phase there was a slew of privacy concerns associated with the
fact that wearing Glass meant wielding a camera at all times. To some, Google Glass was a rare
example of people pushing back against big tech regarding privacy. People were kicked out of
bars because of the recording aspect, the NYT ran a front-page story about privacy concerns,
activists set up groups to push back against the product, and policies were implemented that
forbade people from taking pictures without consent (Eveleth, 2018). Kudina and Verbeek (2019)
explored how Glass mediated the value of privacy by analyzing YouTube comments from the
time of release. However, little consideration is given to the temporal aspects of sociotechnical interaction. It is essential that Glass be examined, not only in the context of its release,
but also with respect to changing norms, human perceptions, and technologies. Without asking
these questions, we remain unprepared to answer whether a similar technology could succeed
today or in the future.
“Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value
of Privacy” by Olya Kudina and Paul-Peter Verbeek (2019) examines online discussions about
Google Glass, particularly comments on a YouTube video produced by Google, in order to
understand “how people articulate new meanings of the value of privacy.” This case study serves
as a demonstration of Verbeek’s own Theory of Technological Mediation, which allows a focus
on “the dynamics of the interaction between technologies and human values” as a way of
addressing the Collingridge Dilemma, which, applied here, says that when a technology is young
it is unknown how it will affect systems, and that by the time the morality surrounding the
technology is clear, it is difficult to change the already widespread technology.
According to mediation theory, engineers design not just products, but they design
human-technological interactions in the world. Technology acts as a mediator, shaping personal
experiences and objects while humans and tech are not separate, but affect each other in their
relations. Rather than speculating about the future, “it studies the dynamics of technomoral
change itself.” While Verbeek’s paper serves as a launch point for understanding human perception around the
time of Glass’s release, and is drawn upon greatly in the below analysis, the data set is of course
not representative of today’s cultural technological landscape. Therefore, this paper hopes to
extend on this work in describing not just Glass’s initial rejection given its social context at the
time, but also inspect perceptions of the technology today.
Conceptual Frameworks and Research Methods
This paper draws mainly on Darryl Cressman’s (2009) overview of Actor Network
Theory and the following definitions are derived from his work unless otherwise cited. In Actor
Network Theory everything, both human and non-human, can be viewed as both an actor and a
network. These actor networks are therefore sociotechnical in nature, and they are sometimes
referred to as heterogenous networks. A network is defined by the associations it describes;
therefore, power of the network and association are intertwined. Additionally, power and
meaning are not inherent to any single actor within a network, rather they are associative,
relational and contextual. When that actor becomes part of another network its associations
change, and as a result its power or meaning changes. Meaning is ascribed to actors within a
network contextually rather than innately (Cressman, 2009).
Engineers in ANT practice heterogeneous engineering, assembling actor networks that
are both human and technical in nature. To understand how the world works, practitioners of
ANT must understand how economic, political, social, and technological factors interact
with each other. In contrast to other STS theories, ANT is symmetrical in the influence of both
the technical and nontechnical (Cressman, 2009).
Technological innovation comes from the process in ANT known as translation. This is
the process by which both the social and technical actors are recruited into a network. This does
not happen all at once; rather, actors are recruited gradually as the network becomes
more robust. In understanding the world through ANT, there is an emphasis on connections
rather than the individual, and these connections are not all equal (Cressman, 2009).
The conclusion of Actor Network Theory is that for a network to succeed, an engineer
must consider all actors: human, nonhuman, technical, political, economic, social, and so on. Engineers
are therefore world builders (Law, 1987), and recruiting actors to make a socially robust network
is the triumph of a network. Neglecting the social aspects, or encountering rogue actors, leads to
a failed network. It will be shown that this is exactly how Google failed as a network builder;
thus, the tools of ANT were chosen to explore this dynamic.
In addition to the academic papers cited and journalistic releases analyzed below, two
means of research were also applied. In order to gain a sense of how potential users today
perceive Google Glass or similar technology, interviews were conducted with a group of non-experts and peers, as well as one industry expert, and the views of enthusiasts of the technology were gauged
via posts on the Google Glass enthusiast subreddit “r/googleglass”.
The purpose of the interviews was not to poll a representative set of the opinions
surrounding Glass, but rather to guide research and find some interesting perspectives surrounding
the technology and privacy today. Subjects were aged 22 to 57 and varied in occupation,
education and sex. All interviewees could be broadly incorporated in a target audience for
Google, as all of them consume smart technology already. The interviewees were asked what
they knew about Google Glass, then asked a set of questions about smart technology, ubiquitous
recording, privacy, and legality, and finally were asked to give their opinion on the viability of a
product like Glass coming back in the future. Interviewees’ knowledge regarding Glass ranged
from having never heard of the product, to familiarity, to an industry insider who markets
technology products for Apple and has worked with former Glass developers. It is important to
emphasize that these interviewees were not randomly selected and are not numerous enough
to act as a focus group of sorts for the product, but they guided research and topic discovery as
well as offered a range of perspectives in this area.
The second group considered are users of the subreddit “r/googleglass.” This is an
enthusiast forum dedicated to the technology. The studied materials consisted of the forum “Top,
This Year” as of March 2021, meaning that these posts were the most “upvoted,” or received the
most positive interactions on that forum for the year. These posts were chosen because they
represent the most popular current opinions and perceptions from the enthusiast community.
Analysis
In the case of Glass, Google serves as the network builder in assembling Glass, not just
the technology but also the human technical interactions. In ANT, technology and society are
dual, in that they exert influence on and shape each other. This is also articulated by Verbeek
through his Technological Mediation framework, claiming that Glass mediates privacy itself. In
the below section, media from the time of Glass’s release will be analyzed to see how the
associations of Glass with these rogue actors led to the destabilization of the network
altogether.
Moving chronologically, the first article analyzed is from February 2013, nearly two
months prior to the launch of the explorer version of Glass. A writer from The Verge met with
Google Glass lead designers for an early test of the product. The author of the article is, as
expected of a tech magazine writer, very optimistic about Google Glass noting the “tremendous
value and potential.” He praises the design, appearance, functionality and wearability of the
device but also posts an early warning sign about the fate of Glass:
“At one point during my time with Glass, we all went out to navigate to a nearby
Starbucks — the camera crew I’d brought with me came along. As soon as we got inside
however, the employees at Starbucks asked us to stop filming. Sure, no problem. But I
kept the Glass’ video recorder going, all the way through my order and getting my coffee.
Yes, you can see a light in the prism when the device is recording, but I got the
impression that most people had no idea what they were looking at. The cashier seemed
to be on the verge of asking me what I was wearing on my face, but the question never
came. He certainly never asked me to stop filming.” (Topolsky, 2013)
The author is keenly aware of the issues looming for Google Glass, saying in his own
words, “The privacy issue is going to be a big hurdle for Google with Glass”. When he brought
this up to the Glass designers he met with (product director Steve Lee and lead industrial
designer Isabelle Olsson), their belief was that the Explorer program was their way of
developing an etiquette surrounding Glass. The use and misuse of the program would be
monitored by Google and feedback would move the product forward. The author then remarks
“that’s not going to answer questions about what’s right and wrong to do with a camera that
doesn’t need to be held up to take a photo, and often won’t even be noticed by its owner’s
subjects. Will people get comfortable with that? Are they supposed to?” (Topolsky, 2013) From
an ANT perspective, privacy is not just a concept but rather a socio-technical actor existing in
their network. It is equally important for Google to consider how Glass allows people to record
the world and affect others’ notions of privacy. However, there is almost an apathy here in the
Explorer program. Google acknowledges that through the Explorer program they can develop an
etiquette around Glass, while recruiting people to their network, but without taking an active role
in designing a project that is socially harmonious, their network is unstable. As the author stated,
their tech fails to answer the questions about what people will do with a camera and if that’s
okay.
Google’s technological optimism, or potentially their naivete, comes through perhaps
strongest in an influential New York Times article from the time of release of Google Glass. This
article, titled “Google Glass Picks Up Early Signal: Keep Out,” details the negative responses that
Google Glass was getting from various places around the country, and contrasts it with Google’s
reputation for being cavalier around privacy. The article quotes Google’s former CEO Eric
Schmidt in 2009 saying, “If you have something that you don’t want anyone to know, maybe you
shouldn’t be doing it in the first place” (Streitfeld, 2013). It is clear that this policy permeates
through to their implementation of Glass, which potentially immortalizes the public realm
through ubiquitous recording, thus making potentially everything known to everyone. A
contributing law expert is quoted as saying “We are all now going to be both the paparazzi and
the paparazzi’s target” (Streitfeld, 2013). Furthermore, the article reports that app developers
made photography with Glass as discreet as simply winking in one application (Streitfeld, 2013).
To many this makes Glass even more intrusive, and although it is unclear if Google would have
allowed a feature like this in their final release, the Explorer program was essential for gradually
recruiting public acceptance into the Glass network. Of course, they failed to do so.
The New York Times article also speaks on a Seattle bar that banned Glass. The owner of
the bar reported to Geekwire that Glass disrupted the private and seedy culture of his bar:
“People want to go there and be not known … and definitely don’t want to be secretly filmed or
videotaped and immediately put on the Internet” (Bishop, 2013). He also notes that “tech geeks”
from Amazon frequent the bar, and he doesn’t want them using Glass inside. This “tech geek
backlash” is another overarching theme regarding these reactionary articles. As one writer put it,
“Google’s core mistake was allowing geeky developers to become the face of Glass” (Constine,
2017). This made recruitment of a more privacy-conscious group difficult for Google, since all of
a sudden there was an “us versus the Glassholes” dynamic.
The owners of the Five Point (the Seattle bar discussed above), the geeky big tech employees, and its various bargoers represent
a key factor that must be considered when assembling a socially robust network. The
associations surrounding actors in a network are entirely variable and context dependent. Where
Google employees may look favorably on how Glass impacts social dynamics, the same is not
true of all society. The heterogeneous engineer of ANT does engineering outside the lab, creating a
network that is socially robust, while Google perhaps looks only inward at its own norms.
Kudina and Verbeek’s (2019) paper using Technological Mediation digs deeper into how
Google approached human behavior and Glass. Google called on the best judgement of its users
and published a list of dos and don’ts surrounding Glass and sat back to observe. The authors
turn to gauging Glass’s mediation of privacy and social interaction via YouTube comments on
this list of dos and don’ts during the explorer phase of Glass. One conclusion is that “Glass
appears as a mediating boundary object between what commenters consider private even in the
most public places and what is violated when the device is introduced,” and to online
commenters “the privacy of forgetting [is] at stake with Glass.” As a designer, under
Technological Mediation, Google creates the mediations, and perhaps for Glass to succeed they
needed to be aware of what human interactions and perceptions were actually being designed.
This wraps into actor network theory nicely, since under ANT designers are responsible for
recruiting the actors that are both human and nonhuman to a sociotechnical network. The
designers are the builders of society, and Google failed in creating a network that is stable both
socially and technically. A few years later, Snapchat created Spectacles, which were smart
glasses that allowed users to record footage and post it to Snapchat. However, because the
social issues went unaddressed, the same fate occurred and this network too failed, as reported by TechCrunch:
“Google Glass tainted the market with its ‘not sure if you’re recording me’ design. Even though
Snap put more obvious recording signal lights on Spectacles, people would still question you
about whether they were on camera. That not only made people uncomfortable being around
Spectacles, but made you feel like a bit of a creep just wearing them” (Constine, 2017).
This is not to say that there is no hope for augmented reality. Google Glass still exists in
the form of Glass Enterprise. Google rebranded the tech to sell to manufacturing and healthcare
businesses. In this space the context is completely different, and the expectation of privacy
spoken about by Verbeek does not exist to the same degree. Privacy, under ANT, can be
considered an actor-network, since it is defined by humans and technological relations.
Therefore, it is also subject to the contextuality present in ANT and takes on a different meaning
in the workplace. There is already perhaps an expectation of being observed at work, at least to
some extent, and especially in manufacturing settings. Computer surveillance software, security
cameras, and sensors in manufacturing monitor people’s actions already in a way that would be
unacceptable in the private space. From an ANT perspective this clearly represents the idea of
variable meanings of actors in different networks. As a result, Google is able to recruit
companies and people to their enterprise network, where associations with privacy hold different
meanings. Strictly speaking, the technology for these products to exist was never the problem;
rather, Glass, people, and privacy take on different meanings in a business setting and the public
space.
While Glass may have been initially deemed unacceptable by society, human perceptions
are subject to change as a result of the technical landscape that permeates life around us. Much
has changed since 2013 when Glass was released. The privacy paradox states that individuals
may desire privacy, but are willing to exchange it for the benefits of technology (Eveleth, 2018).
What was once considered off limits could potentially now be acceptable, as technology has
shaped our views of what is wrong and right. Evidence of this lies in the still active Reddit group
dedicated to Google Glass. Users here are still developing software for Glass, including the top
poster, who writes about hacking a version of Android onto Glass. There are many posts about
buying and selling the technology as well as discussing alternative smart glasses. The discussions
in the community consist of instructions on how to load firmware updates onto Glass as well as
loading custom apps such as Netflix, which is popular among users on the subreddit.
Additionally, there are troubleshooting posts, guides, blogs and external repositories of apps
discussed, linked and posted on the forum. In stark contrast to the YouTube comments analyzed
by Verbeek, these users have embraced Glass and taken on the role of designers in an enthusiast
ecosystem. The general consensus here is also a longing or even optimism about new Google
Glass products, and that the original Glass was “ahead of its time”.
In conducted interviews, while the subjects varied in terms of their individual privacy
beliefs, no subject so far has said that Glass or a similar device should be banned. More
importantly, there was consensus that a product like Glass could or will exist in the future. One
expert, whom Apple contracts for marketing, said, “if Apple released their version of Google Glass
in two years it will be everywhere.” Others, especially young people, saw no problem
whatsoever with the technology, even after having experts’ concerns explained. The justification
was that privacy is already so compromised by technology and governments. Nearly all
interviewees believed that if a product like this came out in the future and was a commercial
success, they would see no problem at all. Like the Reddit users, these interviewees are a far cry
from the mainstream privacy worry in 2013.
In observing both Reddit posters and interviewees, one cannot determine for certain
whether Google Glass could reclaim success today. However, these examples show that the
meaning of actor networks can be derived from context that is time dependent as well. Glass is
not innately moral or immoral. Rather, society’s expectations for privacy, and its morals
surrounding the subject, change with time and the influence of technology. In Actor Network
Theory this is the symmetry of humans affecting technology and technology impacting humans.
While Google failed to realize how these human factors played into their network originally,
perhaps today or in the near future privacy, as its own actor network, will evolve in such a way
that Glass can exist as a socially robust network.
Conclusion
Google Glass is an actor-network that dissolved, not as a result of its technology or any
specific actor, but rather because of the associations and context these human and nonhuman
actors take on in translating the network. In a public space, from the beginning Glass represented
a form of ubiquitous and secret recording, because there was the assumption that anybody at any
time could be caught on the glasses’ camera. Technology mediates human perception, and in this
case, Glass lends new meaning to what is considered private. In an Actor Network framework,
this is a demonstration of the symmetry of human and non-human artifacts influencing each
other, and without these considerations the network was bound to fail. Rather than design a
product that was both social and technical, Google maintained its cavalier approach to privacy,
not considering how people may have reacted to digitalizing the world’s eyesight.
Google employees and Glass users at the time were optimistic about the future of the
product, believing that the product would come to be socially acceptable. This captures the idea
of association and context ascribing meaning in ANT. While Glass may have been acceptable in
Silicon Valley, it did not get the same reception in the mainstream. Similarly, while Google
could not release this product to the public it has faced success in manufacturing, healthcare, and
logistic settings as have other augmented reality headsets. Again, here privacy and people’s
expectations take on a new meaning in the Google Glass Enterprise actor-network.
Much has changed since the release of Google Glass. It has become an expectation in
these times that users trade their own personal data for free services. We have all had our ideas
of privacy mediated by technology in this way. It may be possible then, that in the current year or
in the future a product like Glass will resurface, as it has done in the manufacturing space. Some
Reddit users, for example, have put Google Glass to use in their own lives, modifying and
distributing updated software for the glasses. The consensus on these forums is that Glass was
ahead of its time, and interviewed potential users agree that a product like
this could succeed. From an ANT perspective, again it is clear that the context of associations
within the network matter, rather than the individual parts, and these are all dynamic with respect
to time. If a product like Glass was to reach mainstream success, it would not be strictly the
technology, but rather the recruitment of the technology, the human users, and the social norms
of today or the future that yield a stable network.
While Google Glass as a high-profile product failure has been written about extensively,
there is little in the vein of an STS perspective, and that work focuses on the lens of society at
Glass’s release date. The aim of this paper is to provide an example of how the tools of ANT
can be used to not only analyze the building (and failure) of a technology, but also emphasize
how associations change with context, specifically time. These considerations are essential for
understanding not just the deployment of future technologies, but also the transient nature of
social norms.
References
Bishop, T. (2013, March 8). No Google Glasses allowed, declares Seattle dive bar. GeekWire.
https://www.geekwire.com/2013/google-glasses-allowed-declares-seattle-dive-bar/
Constine, J. (2017, October 28). Why Snapchat Spectacles failed. TechCrunch.
https://social.techcrunch.com/2017/10/28/why-snapchat-spectacles-failed/
Cressman, D. (2009). A Brief Overview of Actor-Network Theory: Punctualization, Heterogeneous
Engineering & Translation. https://summit.sfu.ca/item/13593
Eveleth, R. (2018). Google Glass Wasn’t a Failure. It Raised Crucial Concerns. WIRED.
Retrieved November 1, 2020, from https://www.wired.com/story/google-glass-reasonable-expectation-of-privacy/
Glass. (n.d.). Glass. Retrieved November 2, 2020, from https://www.google.com/glass/start/
Insider, B. (n.d.). BI INTELLIGENCE FORECAST: Google Glass Will Be An $11 Billion Market By
2018. Business Insider. Retrieved November 1, 2020, from
https://www.businessinsider.com/google-glass-11-billion-market-by-2018-2013-5
Kudina, O., & Verbeek, P.-P. (2019). Ethics from Within: Google Glass, the Collingridge Dilemma,
and the Mediated Value of Privacy. Science, Technology, & Human Values, 44(2), 291–314.
https://doi.org/10.1177/0162243918793711
Law, J. (1987). On the Social Explanation of Technical Change: The Case of the Portuguese Maritime
Expansion. Technology and Culture, 28(2), 227–252. https://doi.org/10.2307/3105566
Microsoft HoloLens | Mixed Reality Technology for Business. (n.d.). Retrieved March 27, 2021, from
https://www.microsoft.com/en-us/hololens
Miller, C. C. (2013, February 21). Google Searches for Style. The New York Times.
https://www.nytimes.com/2013/02/21/technology/google-looks-to-make-its-computer-glasses-stylish.html
Streitfeld, D. (2013, May 7). Google Glass Picks Up Early Signal: Keep Out. The New York Times.
https://www.nytimes.com/2013/05/07/technology/personaltech/google-glass-picks-up-early-signal-keep-out.html
Topolsky, J. (2013, February 22). I used Google Glass: The future, but with monthly updates. The
Verge. https://www.theverge.com/2013/2/22/4013406/i-used-google-glass-its-the-future-with-monthly-updates
Give your answer in a numbered list and give an explanation for each reason. Draw all information from the provided context and do not use any outside knowledge or references. | What are 3 reasons that iPSCs are a better approach for treating diabetes than ESCs? | Introducing pancreatic β cells, cultivated in vitro from pluripotent stem cells like embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs), has been suggested as an alternative therapeutic approach for diabetes. The fundamental protocol for the in vitro differentiation of mouse embryonic stem (ES) cells into insulin-producing cells involves a three-step process. This includes (i) the formation of embryoid bodies, (ii) the spontaneous differentiation of embryoid bodies into progenitor cells representing ecto-, meso-, and endodermal lineages, and (iii) the induction of differentiation of early progenitors into the pancreatic lineage. The differentiated cells can be obtained in approximately 33 days. Transgenic expression of PDX-1 (pancreatic and duodenal homeobox 1) and Nkx6.1 (NK6 homeobox 1) has been demonstrated to prompt the differentiation of ESCs into endocrine cells that express insulin, somatostatin, and glucagon. Incorporating growth factors and extracellular matrix elements, including laminin, nicotinamide, and insulin, facilitates the process. The induction of ESC-derived C-peptide/insulin-positive islet-like cell clusters, exhibiting insulin release upon glucose stimulation and expressing Pax4 (paired box gene), represents a significant advancement. Retinoic acid (RA) plays a crucial role in pancreatic development and is commonly employed to prompt pancreatic differentiation of ESCs. Direct addition of RA to activin A-induced human ESCs expressing CXCR4 leads to 95% of cells becoming positive for the pancreatic marker PDX-1 (pancreatic and duodenal homeobox 1). Animal studies have demonstrated that encapsulating human ESC-derived glucose-responsive mature β cells in alginate and transplanting them into a streptozotocin (STZ)-induced diabetic mouse model effectively regulates glycemic control. However, ethical concerns associated with ESCs have restricted their widespread clinical application. As an alternative, induced pluripotent stem cells have been proposed, possessing similar pluripotent characteristics to ESCs, thereby addressing ethical considerations. The primary focus of research on embryonic pancreas development is to enhance our comprehension of the processes involved in the generation of β-cells under normal conditions. This entails not only unravelling the intricate networks of signalling pathways and transcription factors that govern cell-autonomous differentiation but also acquiring insights into epithelial-mesenchymal interactions and the influence of factors secreted by adjacent tissues that guide endocrine and β-cell development. The overarching goal is that, with the accumulation of this comprehensive information, it will be possible to integrate and reconstruct the embryonic differentiation program. This, in turn, could facilitate the ex vivo generation of therapeutic β-cells for potential clinical applications.
The pancreas, a sophisticated endoderm-derived organ, encompasses diverse cell types serving both endocrine and exocrine functions. The exocrine component, constituting over 90–95% of the pancreatic mass, houses acinar cells responsible for secreting digestive enzymes such as lipases, carbohydrases, and amylases. Additionally, ductal cells facilitate the transport of these enzymes into the duodenum. Despite comprising only 1–2% of the pancreatic cell population, hormone-secreting endocrine cells play a vital role in maintaining euglycemia. Within the pancreas, the islets of Langerhans host five distinct endocrine cell types, with the insulin-producing β-cell dominating and constituting 60–80% of the islet. In rodents, and to a lesser extent in humans, β-cells are typically positioned at the centre of the islets, surrounded by other endocrine cell types. The proportion and arrangement of these cells in the adult pancreas, along with the morphological changes during pancreas development, have been extensively studied for over a century. More recently, driven by the advancements in transgenic mouse technology, substantial insights have been gained into the molecular mechanisms governing pancreas organogenesis and epithelial cell differentiation.
During vertebrate embryogenesis, the three primary germ layers—ectoderm, mesoderm, and endoderm—form through extensive cell migration during gastrulation. In the mouse, a favoured mammalian model for embryogenesis studies, a thin cup-shaped sheet of embryonic endoderm evolves into the primitive gut tube, which can be subdivided into distinct regions along the anterior-posterior axis. Each region possesses distinct developmental potential, typically giving rise to various endodermal organs, including the liver, lung, stomach, and pancreas. Specification of the pancreatic field occurs around embryonic day 8.5 (E8.5) in mice and around 3 weeks in humans. Initially, three pancreatic primordia emerge from the definitive gut epithelium: the first from the dorsal side, followed by two primordia on the ventral side. Due to their independent origin and distinct locations along the primitive gut tube, differences arise in the surrounding environment, timing, specificity of signalling pathways, and gene expression profiles guiding these processes. Shortly after formation, one of the ventral buds regresses, while the remaining ventral bud eventually fuses with the dorsal evagination during the gut tube's rotation around E12.5. Subsequently, the pancreatic epithelium undergoes significant growth and branches into the surrounding mesenchyme. Although glucagon-producing cells and a few cells coexpressing insulin and glucagon can be detected as early as E9.5, fully differentiated β-cells and other hormone-secreting cells become prominently evident around E13. Termed the secondary transition, this stage witnesses a substantial increase in endocrine cell numbers through the proliferation and subsequent differentiation of pancreatic progenitors. The pancreas plays a pivotal role in systematically regulating glucose homeostasis, and its development involves a complex interplay of factors that influence stem cell differentiation into pancreatic progenitor cells, ultimately forming a fully functional organ. Consequently, most stem cell-based differentiation protocols aim to generate mature, single hormone-expressing, glucose-responsive human β-cells, drawing insights from studies on pancreatic development. Specific signals orchestrate the programming of insulin-producing β-cells. Transcription factors such as SRY (sex determining region Y)-box (Sox)17 and homeobox gene HB9 (Hlxb9) play crucial roles in endoderm formation during gastrulation. After foregut formation, fibroblast growth factor (FGF)-10, retinoic acid, SOX9, and hedgehog signalling pathways induce pancreatic development. Pancreatic specification and budding are driven by pancreas-specific transcription factors like pancreas transcription factor 1a (Ptf-1a), pancreatic and duodenal homeobox 1 (PDX-1), NK6 homeobox 1 (Nkx6.1), neurogenin-3 (Ngn-3), and mafA. These factors enable the endocrine formation and stimulate ISL LIM homeobox 1 (Isl-1), NK2 homeobox 2 (Nkx2.2), neurogenic differentiation factor (NeuroD), paired box gene (Pax)4, and Pax6 signalling, contributing to the formation of the islets of Langerhans. Throughout pancreatic development, transcription factors Sox17, hepatocyte nuclear factor (HNF)-6, and HNF-3beta (also known as forkhead box A2, Foxa2) are consistently expressed. Finally, FGF-10 and notch signaling-induced stem cell and pancreatic progenitor cell differentiation stimulate neogenesis, leading to the creation of β-cells.
1.1.3. Induced Pluripotent Stem Induced pluripotent stem cells (iPS) are adult cells that undergo genetic reprogramming in the laboratory to acquire characteristics similar to embryonic stem cells. iPS cells possess the remarkable ability to differentiate into nearly all specialized cell types found in the body, making them a versatile resource for generating new cells for various organs or tissues. This quality positions them as valuable tools for disease modelling, with researchers globally exploring their potential to develop cures for severe diseases. Notably, iPS cells offer the advantage of being autologous, meaning they originate from the individual's cells, thereby minimizing the risk of immunological reactions or rejection when transplanted tissues derived from iPS cells are used.
1.1.4. Pancreatic Regeneration Through Induced Pluripotent Stem Cell Human induced pluripotent stem cells (iPSCs) are generated by reprogramming human somatic cells to acquire pluripotent properties. These iPSCs have proven to be a valuable source for deriving glucose-responsive β-like cells. Despite the complexity of β cell development, creating an efficient and reproducible β cell differentiation protocol has been challenging. A potential solution involves initiating differentiation from human iPSC-derived pancreatic progenitor cells expressing PDX-1 and SOX9, which exhibit prolonged proliferation potential and the ability to generate C-peptide-positive β cells. Another effective differentiation protocol involves supplementing factors related to epidermal growth factor (EGF), transforming growth factor β (TGF-β), thyroid hormone, retinoic acid (RA) signalling, and γ-secretase inhibition. This approach results in β cells capable of inducing Ca2+ flux in response to glucose, packaging insulin into secretory granules, and secreting insulin. Due to their unlimited replicative capacity (self-renewal) and pluripotency, iPSCs offer a promising avenue for differentiating into pancreatic endocrine lineage cells, specifically functional insulin-producing pancreatic β cells. Research has consistently reported positive outcomes in various in vitro studies using protocols that emulate the mechanisms of in vivo pancreas development to guide iPSC differentiation into functional β cells. The first demonstration of generating functional β cells from induced pluripotent stem (iPS) cells was conducted by Tateishi and colleagues. Their study revealed that human dermal fibroblast-derived iPS cells, subjected to a four-stage serum-free in vitro differentiation process, could differentiate into functional islet-like clusters (ILCs) with mixed C-peptide+ and glucagon+ cells. Throughout the differentiation, iPS cells underwent stage-specific morphological changes resembling those observed in human embryonic stem cells (ESCs). Functional analysis, employing quantitative reverse transcriptase polymerase chain reaction (RT-PCR) and immunostaining, showed that the differentiated iPS cells expressed stage-specific genes and antigen markers at each developmental stage. These stages included definitive endoderm (Foxa2 and Sox17), pancreatic endoderm (Pdx1), exocrine/endocrine cells (NKX6.1, Ptf1, and Insulin), and insulin-producing cells (Insulin, C-peptide, and glucagon), mirroring the pattern observed in human ESCs.
Provided Text: Introducing pancreatic β cells, cultivated in vitro from pluripotent stem cells like embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs), has been suggested as an alternative therapeutic approach for diabetes. The fundamental protocol for the in vitro differentiation of mouse embryonic stem (ES) cells into insulin-producing cells involves a three-step process. This includes (i) the formation of embryoid bodies, (ii) the spontaneous differentiation of embryoid bodies into progenitor cells representing ecto-, meso-, and endodermal lineages, and (iii) the induction of differentiation of early progenitors into the pancreatic lineage. The differentiated cells can be obtained in approximately 33 days. Transgenic expression of PDX-1 (pancreatic and duodenal homeobox 1) and Nkx6.1 (NK6 homeobox 1) has been demonstrated to prompt the differentiation of ESCs into endocrine cells that express insulin, somatostatin, and glucagon. Incorporating growth factors and extracellular matrix elements, including laminin, nicotinamide, and insulin, facilitates the process The induction of ESC-derived C-peptide/insulin-positive islet-like cell clusters, exhibiting insulin release upon glucose stimulation and expressing Pax4 (paired box gene), represents a significant advancement. Retinoic acid (RA) plays a crucial role in pancreatic development and is commonly employed to prompt pancreatic differentiation of ESCs. Direct addition of RA to activin A-induced human ESCs expressing CXCR4 leads to 95% of cells becoming positive for the pancreatic marker PDX-1H (pancreatic and duodenal homeobox 1). Animal studies have demonstrated that encapsulating human ESC-derived glucose-responsive mature β cells in alginate and transplanting them into a streptozotocin (STZ)-induced diabetic mouse model effectively regulates glycemic control. However, ethical concerns associated with ESCs have restricted their widespread clinical application. As an alternative, induced pluripotent stem cells have been proposed, possessing similar pluripotent characteristics to ESCs, thereby addressing ethical considerations. The primary focus of research on embryonic pancreas development is to enhance our comprehension of the processes involved in the generation of β-cells under normal conditions. This entails not only unravelling the intricate networks of signalling pathways and transcription factors that govern cell-autonomous differentiation but also acquiring insights into epithelial-mesenchymal interactions and the influence of factors secreted by adjacent tissues that guide endocrine and β-cell development. The overarching goal is that, with the accumulation of this comprehensive information, it will be possible to integrate and reconstruct the embryonic differentiation program. This, in turn, could facilitate the ex vivo generation of therapeutic β-cells for potential clinical applications.
The pancreas, a sophisticated endoderm-derived organ, encompasses diverse cell types serving both endocrine and exocrine functions. The exocrine component, constituting over 90–95% of the pancreatic mass, houses acinar cells responsible for secreting digestive enzymes such as lipases, carbohydrases, and amylases. Additionally, ductal cells facilitate the transport of these enzymes into the duodenum. Despite comprising only 1–2% of the pancreatic cell International Journal of Science and Research Archive, 2024, 11(01), 1917–1932 1921 population, hormone-secreting endocrine cells play a vital role in maintaining euglycemia. Within the pancreas, the islets of Langerhans host five distinct endocrine cell types, with the insulin-producing β-cell dominating and constituting 60–80% of the islet. In rodents, and to a lesser extent in humans, β-cells are typically positioned at the centre of the islets, surrounded by other endocrine cell types. The proportion and arrangement of these cells in the adult pancreas, along with the morphological changes during pancreas development, have been extensively studied for over a century. More recently, driven by the advancements in transgenic mouse technology, substantial insights have been gained into the molecular mechanisms governing pancreas organogenesis and epithelial cell differentiation.
During vertebrate embryogenesis, the three primary germ layers—ectoderm, mesoderm, and endoderm—form through extensive cell migration during gastrulation. In the mouse, a favoured mammalian model for embryogenesis studies, a thin cup-shaped sheet of embryonic endoderm evolves into the primitive gut tube, which can be subdivided into distinct regions along the anterior-posterior axis. Each region possesses distinct developmental potential, typically giving rise to various endodermal organs, including the liver, lung, stomach, and pancreas. Specification of the pancreatic field occurs around embryonic day 8.5 (E8.5) in mice and around 3 weeks in humans. Initially, three pancreatic primordia emerge from the definitive gut epithelium: the first from the dorsal side, followed by two primordia on the ventral side. Due to their independent origin and distinct locations along the primitive gut tube, differences arise in the surrounding environment, timing, specificity of signalling pathways, and gene expression profiles guiding these processes. Shortly after formation, one of the ventral buds regresses, while the remaining ventral bud eventually fuses with the dorsal evagination during the gut tube's rotation around E12.5.Subsequently, the pancreatic epithelium undergoes significant growth and branches into the surrounding mesenchyme. Although glucagon-producing cells and a few cells coexpressing insulin and glucagon can be detected as early as E9.5, fully differentiated β-cells and other hormone-secreting cells become prominently evident around E13. Termed the secondary transition, this stage witnesses a substantial increase in endocrine cell numbers through the proliferation and subsequent differentiation of pancreatic progenitors. The pancreas plays a pivotal role in systematically regulating glucose homeostasis, and its development involves a complex interplay of factors that influence stem cell differentiation into pancreatic progenitor cells, ultimately forming a fully functional organ. Consequently, most stem cell-based differentiation protocols aim to generate mature, single hormone-expressing, glucose-responsive human β-cells, drawing insights from studies on pancreatic development. Specific signals orchestrate the programming of insulin-producing β-cells. Transcription factors such as SRY (sex determining region Y)-box (Sox)17 and homeobox gene HB9 (Hlxb9) play crucial roles in endoderm formation during gastrulation. After foregut formation, fibroblast growth factor (FGF)-10, retinoic acid, SOX9, and hedgehog signalling pathways induce pancreatic development. Pancreatic specification and budding are driven by pancreas-specific transcription factors like pancreatic and duodenal homeobox 1 (Ptf-1a), pancreatic and duodenal homeobox 1, NK6 homeobox 1 (Nkx6.1), neurogenin-3 (Ngn-3), and mafA. These factors enable the endocrine formation and stimulate ISL LIM homeobox 1 (Isl-1), NK2 homeobox 2 (Nkx2.2), neurogenic differentiation factor (NeuroD), paired box gene (Pax)4, and Pax6 signalling, contributing to the formation of the islets of Langerhans. Throughout pancreatic development, transcription factors Sox17, hepatocyte nuclear factor (HNF)-6, and HNF-3beta (also known as forkhead box A2, Foxa2) are consistently expressed. Finally, FGF-10 and notch signaling-induced stem cell and pancreatic progenitor cell differentiation stimulate neogenesis, leading to the creation of β-cells.
1.1.3. Induced Pluripotent Stem Induced pluripotent stem cells (iPS) are adult cells that undergo genetic reprogramming in the laboratory to acquire characteristics similar to embryonic stem cells. iPS cells possess the remarkable ability to differentiate into nearly all specialized cell types found in the body, making them a versatile resource for generating new cells for various organs or tissues. This quality positions them as valuable tools for disease modelling, with researchers globally exploring their potential to develop cures for severe diseases. Notably, iPS cells offer the advantage of being autologous, meaning they originate from the individual's cells, thereby minimizing the risk of immunological reactions or rejection when transplanted tissues derived from iPS cells are used.
1.1.4. Pancreatic Regeneration Through Induced Pluripotent Stem Cell Human induced pluripotent stem cells (iPSCs) are generated by reprogramming human somatic cells to acquire pluripotent properties. These iPSCs have proven to be a valuable source for deriving glucose-responsive β-like cells. Despite the complexity of β cell development, creating an efficient and reproducible β cell differentiation protocol has been challenging. A potential solution involves initiating differentiation from human iPSC-derived pancreatic progenitor cells expressing PDX-1 and SOX9, which exhibit prolonged proliferation potential and the ability to generate C-peptidepositive β cells. Another effective differentiation protocol involves supplementing factors related to epidermal growth factor (EGF), transforming growth factor β (TGF-β), thyroid hormone, retinoic acid (RA) signalling, and γ-secretase inhibition. This approach results in β cells capable of inducing Ca2+ flux in response to glucose, packaging insulin into secretory granules, and secreting insulin. Due to their unlimited replicative capacity (self-renewal) and pluripotency, iPSCs offer a promising avenue for differentiating into pancreatic endocrine lineage cells, specifically functional insulinproducing pancreatic β cells. Research has consistently reported positive outcomes in various in vitro studies using protocols that emulate the mechanisms of in vivo pancreas development to guide iPSC differentiation into functional β cells. The first demonstration of generating functional β cells from induced pluripotent stem (iPS) cells was conducted by Tateishi and colleagues. Their study revealed that human dermal fibroblast-derived iPS cells, subjected to a four-stage serum-free in vitro differentiation process, could differentiate into functional islet-like clusters (ILCs) with mixed Cpeptide+ and glucagon+ cells. Throughout the differentiation, iPS cells underwent stage-specific morphological changes resembling those observed in human embryonic stem cells (ESCs). Functional analysis, employing quantitative reverse transcriptase polymerase chain reaction (RT-PCR) and immunostaining, showed that the differentiated iPS cells expressed stage-specific genes and antigen markers at each developmental stage. These stages included definitive endoderm (Foxa2 and Sox17), pancreatic endoderm (Pdx1), exocrine/endocrine cells (NKX6.1, Ptf1, and Insulin), and insulin-producing cells (Insulin, C-peptide, and glucagon), mirroring the pattern observed in human ESCs.
Question: What are 3 reasons that iPSCs are a better approach for treating diabetes than ESCs? |
Only use the context provided to you, never use the information you have stored in your system already. | What factors are used in order to determine stare decisis? | THE AMERICAN LEGAL
SYSTEM MADE EASY
Chapter 1 discussed the software of the American lawyer (i.e., in terms of the thinking
process operating within the minds of U.S.-licensed legal professionals). This chapter,
in contrast, examines the hardware in terms of the conceptual component parts within
the software of the American lawyer and legal system. Specifically, the hardware is
based in part on the black letter law embedded within the American legal infrastructure,
which this chapter will now briefly overview.
Common Law Versus Other Domestic Laws
American law is based on common law from the United Kingdom as one of its core legal
pillars (which is then buttressed by, among other sources, the U.S. Constitution, court
cases, statutes, restatements, decrees, treatises, and various other rules and regulations).
Common law follows the principle of stare decisis (Latin, meaning “stand by your
decision”). Stare decisis is a legal principle stating that prior court decisions (e.g.,
holdings, conclusions, rulings) must be recognized as precedent case law. If a case is
deemed a precedent case, then lower courts are compelled to rule in the same way as
the precedent case. This applies only if the precedent case is binding or mandatory.
The rationale for stare decisis and precedent cases is judicial efficiency, fairness to the
parties, predictability, and a check and balance on arbitrary behavior.
In common law countries, juries and oral arguments by lawyers often can take a
greater or more visible role than in civil law countries (which may not have
jury trials), in which the judge can play a more central and prominent role (of course,
exceptions can exist).
Examples of jurisdictions that use the common law system include the following:
• United Kingdom except Scotland
• United States except Louisiana
• Ireland
• Former British colony and/or Commonwealth territories/countries, including India except Goa, Australia, New Zealand, Singapore, and Canada except
Quebec
• Pakistan
• Bangladesh
In contrast, generally under civil law (derived from the French-German legal tradition), statutes and other similar legal sources represent relatively greater legal authority
than does case law. Under civil law, neither precedent cases nor stare decisis exist. The
rationale for this is greater judicial freedom to decide cases on a case-by-case basis.
Some people argue, however, that this system may come at the cost of less predictability and consistency regarding case law conclusions (with similar legal issues and/or
facts).
Examples of jurisdictions that use the civil law system include the following:
• Most European Union (EU) nations, including Germany and France, from which civil
law was derived, but not the United Kingdom, Ireland, or Cyprus
• Most of continental Latin America except Guyana and Belize
• Congo
• Azerbaijan
• Iraq
• Russia
• Turkey
• Egypt
• Madagascar
• Lebanon
• Switzerland
• Indonesia
• Vietnam
• Thailand
The factors used in determining whether to apply stare decisis include the following:
• Similarity of legal issue(s)/legal principle(s)
• Whether the precedent case was ruled on by a court recognized as a leading one
in the relevant subject area
• Whether the precedent case was well-reasoned and articulated (in the court’s
legal opinion)
• Whether the precedent case was issued from a court in the same jurisdiction
• Whether the precedent case was issued from a higher-level court
Although these factors are often considered to determine whether a case is a precedent case, thus representing a binding and mandatory legal source, a court may not be
required to follow:
• Secondary legal sources (i.e., nonprecedent cases, not related to the U.S. Constitution, and the like; see the following paragraph for further specifics)
• Cases that do not align with these factors to determine the precedential value of
a case
Two main types of legal sources exist in American law: primary and secondary.
1. Primary legal sources include the following:
• U.S. Constitution
• Statutes
• Rules, regulations, and orders
• Executive orders and proclamations
• Case law
2. Secondary legal sources include the following:
• Treatises
• Restatements
• Law review journals
• American Law Reports
• Hornbooks
• Legal encyclopedias
A general hierarchy also exists in which federal legal sources are weighed more
heavily than state legal sources:
A. Federal Legal Sources
• U.S. Constitution
• Federal statutes and treaties
• Federal rules and regulations
• Federal cases
B. State Legal Sources
• State constitutions
• State statutes
• State rules and regulations
• State law cases
From this list, two interesting points arise: (1) the U.S. Constitution represents the
supreme law of the land, and (2) a federal supremacy rule applies. This means that
federal sources are generally higher than state sources in the legal source hierarchy.
This is important to know for both academics and practitioners to determine what legal
source should be given greater weight relative to others, which can help in the legal
strategy process.
State Law
Although the United States is one country, from a legal perspective, each individual
state within it has a certain level of discretion to determine what types of laws best fit
that particular state’s set of circumstances. The concept of dualism, in which sources
of law exist dually at both the federal and state level, is based in part on the view that
decentralization of power is needed. The intent of dualism was to provide greater security that one central source of authority would not become overly powerful—as was the
case with England at the time of the founding of the United States.
Furthermore, as Chapter 6 discusses in greater detail regarding Constitutional
Law, the U.S. Constitution (the nation’s highest legal authority) has embedded in
it a concept known as the enumerated powers doctrine. In the enumerated powers
doctrine, the federal government has only those powers expressly conveyed to it
under the Constitution (under Article I, Section 8), with all other remaining powers
generally belonging to the states.
Thus, state laws are actually much more widely encompassing than many people
from non–common law countries would expect. With this in mind, each specific state’s
law can vary and be different from other state laws. Although diversity exists, many
state laws are based on certain standardized laws.
Examples of standardized laws that state law can be based on include the following:
• Restatements of law, which are used to provide clarity on certain law matters
• Prepared by the American Law Institute (ALI)
• Represents secondary (nonprimary) legal source/authority
• Uniform acts/Uniform codes, such as the Uniform Commercial Code, or UCC,
relating to contract law
• Drafted by the Uniform Law Commissioners
• Body of lawyers and other legal professionals whose objective is to standardize laws across the various U.S. states
• Offered as legal models, which each state can ratify in whole or in part
• Model penal code (MPC), relating to criminal law matters
• Prepared by the ALI, much like restatements
• Objective of updating and standardizing penal law across the various U.S.
states
• MPC represents what the ALI deems as the best rules for the U.S. penal
system
Much like the dual federal-state level of legal sources, a similar dual system of
federal-state court systems exists. Consistent with the principle of federalism, federal
courts rank higher in the judicial court hierarchy relative to state courts.
The Federal Court hierarchy (from highest to lowest) is as follows:
• U.S. Supreme Court
• Circuit courts
• District courts
Federal courts consider the following legal sources:
• Federal (nonstate) statutory issues
[Figure: Dual court hierarchy. Federal: Supreme Court of the United States; U.S. Courts of Appeal (13 Circuit Courts); U.S. District Courts (94 Trial Courts). State: State Supreme Courts; Intermediate Appellate Courts (39 of 50 States); State Trial Courts (Across 50 States).]
• Diversity cases, such as cases involving parties from two different states
• Cases in which the United States is a party as plaintiff or defendant
• Other cases as specified by law (e.g., admiralty, antitrust, maritime)
• Removal jurisdiction cases, in which the defendant requests the case to be heard
by a federal, rather than a state, court in the same district
The U.S. Supreme Court (USSC) is the highest court in the United States. The U.S.
Supreme Court generally hears cases based on appeal (when certiorari—or in plain
English, review—is granted to review the case). In other words, the USSC is only in
rare circumstances the court of first instance having original jurisdiction over a case.
Of course, exceptions exist when an issue is particularly urgent. For instance, the Bush
v. Gore (2000) case was heard by the USSC at first instance because its ruling could, in
effect, determine the outcome of the 2000 U.S. presidential election.
Below the USSC in judicial hierarchy are the federal circuit courts. The circuit
courts generally hear appeals from the lower district courts. Unlike the USSC, federal
circuit courts have original jurisdiction (court of first instance) over orders of certain
federal agencies. The federal circuit courts are divided geographically into 13 circuit
courts. Circuit courts numbered from 1 to 13 encompass all of the states (including
Hawaii), with an additional district for Washington D.C. (which is a federal territory,
not a U.S. state), and a federal circuit for certain specialized matters.
Many cases begin at the state court level and, if needed, are appealed to the federal level (except for the instances discussed previously), in particular, when a federal
(rather than a state) issue arises.
State Courts
Most state court systems replicate the federal court system. Some state courts have three
levels of hierarchy, whereas other state courts have two levels of hierarchy. Regardless,
each state court has its own rules of procedure and set of practices.
With a three-level state court system, the hierarchy is typically the following:
• State Supreme Court: Hears appeals from state intermediate court
• State court of appeals: Hears appeals from lower trial court
• State trial court: Conducts fact-finding as well as ruling on the legal issue(s)
presented
State courts usually can review almost any case, but exceptions exist, such as where
jurisdiction is precluded by (1) federal statute; (2) the U.S. Constitution; or (3) other
legal source, expressly (e.g., admiralty, patent, copyright) or implicitly (e.g., antitrust
damages and injunction).
American Judicial System
The United States has three branches of government: (1) the legislative branch (the
Congress, which is composed of the Senate and House of Representatives); (2) the
executive branch (including the U.S. President), and (3) the judicial branch (including
the USSC and other courts). The three branches of government are based on the concept of checks and balances, so that each branch of government does not become too
powerful relative to the other two branches.
Related terms are defined as follows:
• Congress: Bicameral institution that refers to the Senate and the House of Representatives
• House of Representatives:
• Referred to as the lower house (because the legislative process typically
begins here and then proceeds to the Senate).
• The number of Representatives is based on the population of each state
(thus, the larger and more populated states—such as California, Texas, and
New York—generally have more Representatives).
• House representatives are elected to two-year terms and can be reelected
continuously.
• Senate:
• Referred to as the higher chamber (because the Senate is the second chamber in the legislative process).
• Two senators are elected from each of the 50 states (regardless of a state’s
population).
• Senators are elected to six-year terms with the possibility of reelections.
• Government lawyers:
• Prosecutor: A government attorney who prepares and conducts the prosecution of the accused party
• District Attorney (DA) (or county prosecutor): A government prosecutor
representing a particular state
• United States (U.S.) Attorney: A federal prosecutor representing the United
States for certain federal districts
An example of checks and balances in practice could involve an impeachment proceeding against the executive branch. An attempt to impeach the U.S. President (executive branch), for instance, would involve the legislative branch placing a check and
balance on the executive branch by arguing, among other things, that certain actions
of the presidency allegedly violated the U.S. Constitution. The judicial branch (federal
courts) can serve as a check and balance if it decides to review the acts of the legislative
branch in terms of constitutionality (i.e., to determine whether an act by the legislative
branch allegedly violated the U.S. Constitution, which all three branches must abide
by). The federal courts can also review the actions of federal administrative agencies.
At the same time, the legislative branch (Congress) can review and overrule court precedent under its designated Congressional authority.
The American legal system can appear diverse and complex. With the overview
provided in this chapter, it is hoped that readers have a better understanding and greater
clarity regarding the hardware of American law. This understanding of the American
legal infrastructure will help, as the next chapters will fill in the landscape—section by
section—that will culminate into a panoramic primer of American law.
The reading and understanding of cases is important in most, if not all, jurisdictions in the world. The U.S. legal system, which is based on the common law system of
England, treats case law (law based on the interpretation of cases by the judiciary) as
especially important. This is based on the previously mentioned concept of stare decisis. Under stare decisis, lower courts often must (as opposed to can) rule and conclude
the case in a manner consistent with higher courts in the same jurisdiction regarding
previous cases with similar facts and issues (which links back to the IRAC legal thinking process covered earlier in Chapter 1).
The American legal system’s main rationale for stare decisis is consistency and
greater foreseeability of how similar cases may be concluded by the courts. However,
with benefits come drawbacks. With stare decisis, the drawback is less judicial discretion
afforded to the courts and judges in an effort to treat each dispute on a case-by-case basis.
What is considered as the drawback of the common law system under stare decisis is
often viewed as the benefit of the civil law system, in which stare decisis does not apply.
This thus gives greater judicial discretion to the courts, at the potential cost of inconsistent judicial conclusions even within the same jurisdiction.
So which domestic legal system among the two is better: common law or civil law?
When students and even practitioners pose this question, a common first answer is that
each system has both benefits and costs (as analyzed here), and it is incumbent upon
each jurisdiction to determine which system makes the most sense, all things considered.
The other answer is that an increasing convergent trend is now occurring, whereby legal
practitioners from both common and civil legal traditions often tend to think more similarly now than in the past, particularly in commercial transactions and dealings. This
convergence may be in part a result of globalization, technological advancements, and
students studying internationally—creating a greater exposure and knowledge base of
the common law tradition (as well as civil law and other domestic legal traditions, such
as Islamic law). (See the Appendices for further specifics on the American court system.)
To understand the American legal system, legal cases reflecting case law must
be understood in great detail. This is especially critical given the importance of stare
decisis and precedent cases in American law, as discussed earlier. Because of the
importance of case law and understanding cases, the next section provides a more
detailed glimpse into the main elements of a case within the American judicial system,
including a method of how to read and brief a case—a vital skill set for both the study
and practice of American law.
How to Read and Brief a Case
With the high level of importance given to stare decisis and precedent cases underlying
American law, a fundamental knowledge of how to understand and brief a U.S. case
is critically important. This is true as a law student as well as a law practitioner who
aspires to gain a greater understanding of American law.
To begin, most court decisions are published, both at the federal and state level.
The court issuing the opinion often has the discretion in deciding whether to publish an
opinion it has rendered.
Specific case elements exist in a typical case brief, which include the following:
• Case Name and its citation to find and/or reference the case
• Author of the Opinion (the Opinion is the court’s ruling/decision): Generally,
the person who authors a legal opinion is a judge or arbitrator (the concept and
role of arbitrators is discussed in greater detail in Chapter 10).
• Opinion, which generally includes:
• Case Facts and relevant procedural history of the case, such as past appeals
and rulings
• Court Conclusion, also referred to as the case’s holding
• Reasoning: Detailing the rationale, arguments, and other factors considered
by the court
• Disposition: Court action based on the court’s ruling/conclusion (e.g., reversed, affirmed, remanded).
The case caption can be thought of as a title for a case. Example: Brown v. Board
of Education, 347 U.S. 483 (1954). The case caption includes the parties, case citation (court name, law book where the opinion is published), and year of the court’s
conclusion. In terms of formality of writing for a case caption, the party names to the
dispute are italicized and/or underlined (the example has the party names italicized).
The remaining case caption (e.g., citation/reporter details, year that the decision was
rendered, and other related details) generally is not italicized or underlined.
Reporters
Cases that are published are included in publications called reporters. Each reporter has
a volume number and page numbers. Some reporters are published by the state, while
some are published by commercial institutions. For the case citation/reporter relating
to the previous example, the case would be found in volume 347 of the United States
Reports on page 483.
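As a practical aid, the short sketch below pulls the party names, reporter volume, reporter abbreviation, page, and page year out of a simple caption such as Brown v. Board of Education, 347 U.S. 483 (1954). It is only an illustrative helper written in Python; the pattern and field names are assumptions for this example, not an official citation grammar, and real citations vary far more widely.

```python
import re

# Rough pattern for a basic caption of the form
# "Brown v. Board of Education, 347 U.S. 483 (1954)".
# Real-world citations are far more varied; this handles only the simple case.
CAPTION_PATTERN = re.compile(
    r"^(?P<plaintiff>.+?) v\. (?P<defendant>.+?), "
    r"(?P<volume>\d+) (?P<reporter>.+?) (?P<page>\d+) "
    r"\((?P<year>\d{4})\)$"
)

def parse_caption(caption: str) -> dict:
    """Split a simple case caption into parties, volume, reporter, page and year."""
    match = CAPTION_PATTERN.match(caption.strip())
    if match is None:
        raise ValueError(f"Unrecognized caption format: {caption!r}")
    return match.groupdict()

if __name__ == "__main__":
    parts = parse_caption("Brown v. Board of Education, 347 U.S. 483 (1954)")
    # {'plaintiff': 'Brown', 'defendant': 'Board of Education',
    #  'volume': '347', 'reporter': 'U.S.', 'page': '483', 'year': '1954'}
    print(parts)
```

Read back against the example above, volume 347 and page 483 point to where the opinion appears in the United States Reports.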
Judicial Titles
The author of the court opinion, as mentioned, is typically a judge. In this case, the
judge, in his or her capacity as legal opinion author (for the majority or minority opinion), is written at the top of the legal opinion, as follows:
Example: “Hand, J.” refers to Judge Hand.
Example: “Holmes, J.” refers to Justice Holmes.
Some jurisdictions use terms other than “judge,” albeit referring to the same judicial decision-rendering role:
Example: “Jackson, C.” refers to Chancellor Jackson.
Example: “Jackson, V.C.” refers to Vice-Chancellor Jackson.
Example: “Jackson, C.J.” refers to Chief Judge Jackson.
Party Names
In a civil (noncriminal) case, the party initiating the lawsuit is the plaintiff, and the
party defending against the plaintiff’s lawsuit is the defendant (not coincidentally,
the term “defendant” has the term “defend” embedded in it). In criminal (noncivil)
cases, the party initiating the lawsuit is referred to as the state (or similar terminology), because the interests of the state (or other relevantly named party initiating the
lawsuit) are presumed greater than one individual (such as by a plaintiff in a civil
law case).
The plaintiffs (or state) are usually the first party listed in the caption. For the previous caption example, Brown is the plaintiff at the initial stage (prior to an appeal, if
an appeal is rendered). If a case is heard on appeal (in which a case is heard for the
second time or more), then the party initiating the appeal is called the appellant. The
party defending against the appellant’s lawsuit on appeal is called the appellee. Thus,
as an example, if the Board of Education in the previous example appealed, then the
Board of Education would be the first named party in the caption of the appealed case
(rather than second, as was the case in the original lawsuit example).
The court’s conclusion or ruling is the court’s legal opinion and the rationale
given for reaching a particular judgment, finding, or conclusion. Underneath the
broad term of legal opinion, several specific subsets of opinions exist. A concurring
opinion is an opinion rendered by a judge who would have reached the same conclusion as the majority opinion, but for a different reason (i.e., same destination, but
would have chosen a different route to get to the destination). A plurality opinion is
an opinion agreed on by less than the majority of the judges (assuming a panel of
judges), but the opinion agrees with the majority opinion’s conclusion. A dissenting
opinion is an opinion by one or more judges who disagree with the majority opinion’s
conclusion.
The parties to a lawsuit (at the initial trial court level) include the following:
• Plaintiff: Party initiating the lawsuit
• Defendant: Party defending against the lawsuit (legal action by plaintiff)
• Counterclaimant: Defendant’s counterclaim against the plaintiff
• Cross-claimant: Defendant bringing a lawsuit against a third party, typically
with a view that the introduced third party was at least partially responsible/
liable for owed damages to plaintiff
• Third-party defendant: Party defending against a cross-claim for alleged damages owed to plaintiff
• Intervenor: Interested party participating in litigation with the court’s permission
The parties to a lawsuit (at the noninitial appellate court level) include the following:
• Appellant: Party appealing a lower court’s ruling (usually the unsuccessful party in the previous lawsuit)
• Appellee: Party defending against the appellant’s actions
• Petitioner: Party challenging action, usually in an agency context
• Respondent: Party defending against petitioner’s actions, usually in an agency
context
• Intervenor: Same as intervenor at the trial court level
• Amicus curiae (“friend of the court”): Party given court permission to participate in the case
• U.S. Solicitor: Government attorney representing the United States
The parties to a lawsuit (at the highest U.S. Supreme Court level) include the
following:
• Petitioner: Party seeking the Supreme Court’s review, arguing for the rejection
of the lower court’s decision
• Respondent: Party opposing the Supreme Court’s review, arguing that the lower
court’s decision does not warrant review, because the lower court’s conclusion
and rationale are legally valid
• Intervenor: Same as intervenor at the trial/appellate court level
• Amicus curiae: Same as at the appeals court level
• U.S. Solicitor: Government attorney representing the United States
Court Dispositions—General
• Order: Court resolution of a motion (filed by one of the parties)
• Affirmation: Court’s decision to uphold the lower court’s ruling
• Reversal: Court’s rejection of the lower court’s ruling
• Remand: Court order to return the case to the lower court (or agency) for further factual findings, or for other resolution in conformity with the appellate
court’s decision
• Vacate: Court rejection of the lower court’s ruling, with an order to set aside and
render the lower court’s ruling as null and void
• Modification: Court’s affirmation of part of the lower court’s decision, with an
ordered modification to the opinion
Court Dispositions—Appellate Courts
• En Banc Opinion:
• Represents an opinion by all members of the court, not just a certain number
(panel) of sitting judges, to hear a particular case
• Generally represents a rare exception rather than the norm
• Usually seen in issues of extreme importance
Court Disposition—Supreme Court
• Plurality Opinion:
• An opinion that more judges sign than any concurring opinion
• Does not constitute a majority opinion
• Does not have the force of precedent, because it is not a result of a majority
opinion
• Certiorari Granted:
• Grant of discretionary review by the U.S. Supreme Court (often considered
the exception rather than the norm because the Supreme Court is unable to
grant certiorari to most cases given its limited time and resources)
• Does not reverse or directly affect lower court rulings
• Certiorari Denied:
• U.S. Supreme Court’s decision to reject discretionary review of a particular
lower court ruling
• Does not generally have precedential effect
In most legal opinions, part of the court’s decision may include analysis and language that may not directly be necessary to reach the court’s resolution of the legal
issue. This part of the case is referred to as dictum. Dictum is not the court’s holding.
In other words, dictum is related, but separate from, the court’s holding. Given that
dictum is not part of a court’s holding, stare decisis does not apply. It may be difficult
to distinguish a court’s dictum from its holding. Still, dictum may be useful for future
cases, because it is, at times, a signal or hint of how the court (or at least a judge in the
court) may view a case in light of different legal issues or facts.
Summary
The American judicial system is based on British common law, which is then buttressed by, among other sources, the U.S. Constitution, court cases, statutes, restatements, decrees, treatises, and various other rules and regulations. The American legal
system is composed of the U.S. Supreme Court, federal courts, and state courts. Within
both federal and state courts, primary and secondary legal sources are considered. The
U.S. Supreme Court is the highest court in the land. It can grant certiorari to select cases
for various reasons, including whether the issue presented is urgent or of vital national
interest. Generally, however, a lawsuit begins in state courts and then, as needed, is
heard on appeal by federal (appellate-level) or state courts. Knowledge of the structure
of the American judicial system is then furthered by understanding how to read and
brief a law case, which is a vital skill set for law students and practitioners.
| Only use the context provided to you, never use the information you have stored in your system already.
What factors are used in order to determine stare decisis?
THE AMERICAN LEGAL
SYSTEM MADE EASY
Chapter 1 discussed the software of the American lawyer (i.e., in terms of the thinking
process operating within the minds of U.S.-licensed legal professionals). This chapter,
in contrast, examines the hardware in terms of the conceptual component parts within
the software of the American lawyer and legal system. Specifically, the hardware is
based in part on the black letter law embedded within the American legal infrastructure,
which this chapter will now briefly overview.
Common Law Versus Other Domestic Laws
American law is based on common law from the United Kingdom as one of its core legal
pillars (which is then buttressed by, among other sources, the U.S. Constitution, court
cases, statutes, restatements, decrees, treatises, and various other rules and regulations).
Common law follows the principle of stare decisis (Latin, meaning “stand by your
decision”). Stare decisis is a legal principle stating that prior court decisions (e.g.,
holdings, conclusions, rulings) must be recognized as precedent case law. If a case is
deemed a precedent case, then lower courts are compelled to rule in the same way as
the precedent case. This applies only if the precedent case is binding or mandatory.
The rationale for stare decisis and precedent cases is judicial efficiency, fairness to the
parties, predictability, and a check and balance on arbitrary behavior.
In common law countries, juries and oral arguments by lawyers often can take a
greater or more visible role compared to in civil law countries (which may not have
jury trials), in which the judge can play a more central and prominent role (of course,
exceptions can exist).
Examples of jurisdictions that use the common law system include the following:
• United Kingdom except Scotland
• United States except Louisiana
• Ireland
• Former British colony and/or Commonwealth territories/countries, including India except Goa, Australia, New Zealand, Singapore, and Canada except
Quebec
• Pakistan
• Bangladesh
In contrast, generally under civil law (derived from the French-German legal tradition), statutes and other similar legal sources represent relatively greater legal authority
than does case law. Under civil law, neither precedent cases nor stare decisis exist. The
rationale for this is greater judicial freedom to decide cases on a case-by-case basis.
Some people argue, however, that this system may come at the cost of less predictability and consistency regarding case law conclusions (with similar legal issues and/or
facts).
Examples of jurisdictions that use the civil law system include the following:
• Most European Union (EU) nations, including Germany and France where civil
law was derived, but not the United Kingdom, Ireland, or Cyprus
• Most of continental Latin America except Guyana and Belize
• Congo
• Azerbaijan
• Iraq
• Russia
• Turkey
• Egypt
• Madagascar
• Lebanon
• Switzerland
• Indonesia
• Vietnam
• Thailand
The factors used in determining whether to apply stare decisis include the following:
• Similarity of legal issue(s)/legal principle(s)
• Whether the precedent case was ruled on by a court recognized as a leading one
in the relevant subject area
• Whether the precedent case was well-reasoned and articulated (in the court’s
legal opinion)
• Whether the precedent case was issued from a court in the same jurisdiction
• Whether the precedent case was issued from a higher-level court
Although these factors are often considered to determine whether a case is a precedent case, thus representing a binding and mandatory legal source, a court may not be
required to follow:
• Secondary legal sources (i.e., nonprecedent cases, not related to the U.S. Constitution, and the like; see the following paragraph for further specifics)
• Cases that do not align with these factors to determine the precedential value of
a case
Two main types of legal sources exist in American law: primary and secondary.
1. Primary legal sources include the following:
• U.S. Constitution
• Statutes
• Rules, regulations, and orders
• Executive orders and proclamations
• Case law
2. Secondary legal sources include the following:
• Treatises
• Restatements
• Law review journals
• American Law Reports
• Hornbooks
• Legal encyclopedias
A general hierarchy also exists in which federal legal sources are weighed more
heavily than state legal sources:
A. Federal Legal Sources
• U.S. Constitution
• Federal statutes and treaties
• Federal rules and regulations
• Federal cases
B. State Legal Sources
• State constitutions
• State statutes
• State rules and regulations
• State law cases
From this list, two interesting points arise: (1) the U.S. Constitution represents the
supreme law of the land, and (2) a federal supremacy rule applies. This means that
federal sources are generally higher than state sources in the legal source hierarchy.
This is important to know for both academics and practitioners to determine what legal
source should be given greater weight relative to others, which can help in the legal
strategy process.
State Law
Although the United States is one country, from a legal perspective, each individual
state within it has a certain level of discretion to determine what types of laws best fit
that particular state’s set of circumstances. The concept of dualism, in which sources
of law exist dually at both the federal and state level, is based in part on the view that
decentralization of power is needed. The intent of dualism was to provide greater security that one central source of authority would not become overly powerful—as was the
case with England at the time of the founding of the United States.
Furthermore, as Chapter 6 discusses in greater detail regarding Constitutional
Law, the U.S. Constitution (the nation’s highest legal authority) has embedded in
it a concept known as the enumerated powers doctrine, under which the federal government has only those powers expressly conveyed to it under the Constitution (under Article I, Section 8), with all other remaining powers generally belonging to the states.
|
You can only answer using the information I am giving you. Make it sound like a dictionary definition. Make sure you use only your own words and do not copy any words or phrases from the context. | If I don't mention sunscreen in the label for my UV lip balm, then can it even be a cosmeceutical? | Context: The FFDCA defines a “drug” in part as “articles intended for use in the diagnosis, cure,
mitigation, treatment, or prevention of disease”; articles “(other than food) intended to affect the
structure or any function of the body”; and “articles intended for use as a component” of such
drugs.15
Drug manufacturers must comply with Current Good Manufacturing Practices (CGMP) rules for drugs.16 Failure to comply will cause a drug to be considered adulterated.17 Drug manufacturers are required to register their facilities,18 list their drug products with the agency,19 and report adverse events to FDA, among other requirements.20
Unlike cosmetics and their ingredients (with the exception of color additives), drugs are subject to
FDA approval before entering interstate commerce. Drugs must either (1) receive the agency’s
premarket approval under a new drug application (NDA), or an abbreviated NDA (ANDA),21 in
the case of a generic drug, or (2) conform to a set of FDA requirements known as a monograph.22
Monographs govern the manufacture and marketing of most over-the-counter (OTC) drugs and
specify the conditions under which OTC drugs in a particular category (such as antidandruff
shampoos or antiperspirants) will be considered generally recognized as safe and effective
(GRASE).23 Monographs also indicate how OTC drugs must be labeled so they are not deemed misbranded.24
Although the term “cosmeceutical” has been used to refer to combination cosmetic/drug products,
such products have no statutory or regulatory definition.25 Historically, FDA has indicated that
cosmetic/drug combinations are subject to FDA’s regulations for both cosmetics and drugs.26
Determining whether a cosmetic is also a drug, and therefore subject to the additional statutory
requirements that apply to drugs, depends on the distributor’s claims regarding the drug’s intent
or intended use.27 A product’s intended use may be established in several ways, such as claims on
the label or in advertising or promotional materials, customer perception of the product, and the
inclusion of ingredients that cause the product to be considered a drug because of a known
therapeutic use.28 For example, if a lipstick (a cosmetic) contains sunscreen (a drug), historically,
the mere inclusion of the term “sunscreen” in the product’s labeling required the product to be
regulated as a drug as well as a cosmetic.29 The text box below provides examples of other
cosmetic/drug combinations and compares cosmetic and drug classifications.30
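As a rough sketch of the intended-use reasoning described above, the Python snippet below flags a product as a cosmetic, a drug, or both from its label claims and ingredients. The keyword and ingredient lists are invented for the illustration and are not FDA criteria; this is a schematic of the logic only, not a compliance tool.

```python
# Illustrative only: these keyword lists are assumptions for the example,
# not FDA definitions or an actual regulatory test.
DRUG_CLAIM_HINTS = {"treats", "cures", "prevents", "sunscreen", "spf", "antidandruff"}
COSMETIC_CLAIM_HINTS = {"cleanses", "beautifies", "moisturizes", "fragrance"}
THERAPEUTIC_INGREDIENT_HINTS = {"zinc oxide", "ketoconazole", "aluminum chlorohydrate"}

def classify_product(label_claims: list[str], ingredients: list[str]) -> set[str]:
    """Return the categories a product may fall under based on its intended use."""
    text = " ".join(claim.lower() for claim in label_claims)
    categories = set()
    if any(hint in text for hint in COSMETIC_CLAIM_HINTS):
        categories.add("cosmetic")
    # Drug status can follow from therapeutic claims or from ingredients with a known therapeutic use.
    if any(hint in text for hint in DRUG_CLAIM_HINTS) or any(
        ingredient.lower() in THERAPEUTIC_INGREDIENT_HINTS for ingredient in ingredients
    ):
        categories.add("drug")
    return categories or {"unclassified"}

if __name__ == "__main__":
    # A lipstick marketed with a sunscreen claim ends up in both categories.
    print(classify_product(["Moisturizes lips", "SPF 15 sunscreen"], ["zinc oxide"]))
```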
Prior to the enactment of the Federal Food, Drug, and Cosmetic Act (FFDCA) in 1938, cosmetics
were not regulated by the federal government.31 Instead, they were regulated under a collection of
state laws that had been enacted to regulate food and drugs.32 At that time, multiple “cosmetics
and drugs were made from the same natural materials” and often the “laws did not include
explicit definitions of the products regulated.”33 Following several incidents in which cosmetics
were allegedly the cause of serious health problems, as well as industry concerns about states
enacting their own laws, provisions were included in the FFDCA that prohibited the sale of
adulterated or misbranded cosmetics in interstate commerce.34 The FFDCA also established
uniform regulation of FDA-regulated cosmetic products nationwide.35 However, state laws
regarding cosmetics regulation have continued to evolve since FFDCA’s passage, with some
states implementing stricter measures than others.
System instruction: You can only answer using the information I am giving you. Make it sound like a dictionary definition. Make sure you use only your own words and do not copy any words or phrases from the context.
what I want to know: If I don't mention sunscreen in the label for my UV lip balm, then can it even be a cosmeceutical? |
Provide the answer based solely on the document provided. The answer should be in complete sentences. | According to Warren Buffett, when is the best time to invest in the stock market? | **Best time to invest in stock market**
The Colombo stock market has gone up by over 1,000 points (more than 20%) during the last few weeks. With this growth, a large number of investors are either trying to enter the market or trying to maximize profits from their existing investments. In order to assist them in their investment decisions, this week we will discuss a topic that most investors ask. Is there ever a good time to invest in the stock market? This question is frequently asked by investors, and for good reason, as no one wants to invest in the stock market only to see it fall the following day or even the following week.
Is there a right time to invest in stock market?
Is there a right time to invest in the stock market? That’s the magic question people have asked for as long as the stock market has been around. The simplest answer is that there is no right time to invest in the stock market. But it may seem as if some people have figured it out, such as billionaires like Warren Buffett, who seem to always know when to invest, how much to invest and where to put their money. But investors like him consider many more factors that have less to do with guessing the ‘right time’ and more to do with trying to predict how the stock will do based on recent reports and announcements by the company. Even then, they could be wrong. Many investors buy into and sell out of the market more frequently than they should. They are trying to ‘time’ the market. If you have never heard of this term before, it is described as trying to pick when the stock market has hit a top or a bottom and then buying into or selling out of the market accordingly. For example, if you are predicting that the market has hit the peak of the cycle, then you sell out of your holdings because the market has nowhere to go but down. Conversely, if you think the market has bottomed, meaning it won’t go any lower, you invest your money, since the market can only go up. Many smart investors try to predict how stocks and the overall stock market will behave and try to invest according to what they believe will happen. Even though they may predict the market right nine out of 10 times, they will still get it wrong that 10th time and it will cost them money, either because they invested in the wrong stock or didn’t invest in a stock that skyrocketed to the top. It’s extremely difficult to predict how stocks or the stock market will do, although it is possible to predict certain trends because they are more obvious than many of the other subtler factors that can determine how well a stock does. There are people out there who claim to know exactly when to invest in the market. And a lot of people actually believe them because of what they see. But the fact is usually that these people invest in many different sectors of the stock market and, when they see success in one sector, they share only that success, which makes it seem like they know what they are talking about all the time. This isn’t actually a scam (although there are scams like this); it’s more a case of the person hoping his or her research pays off, and more often than not, it does. Just like people, there are companies with websites claiming to know which stocks will go up in price. And just like the people claiming to be stock market whisperers, these companies do extensive research, which gives them hints about which companies will go up and which will go down. Then they share the information with the public, most of the time for a fee.
Is there a wrong time to invest in stock market?
Unfortunately, it seems that there is a wrong time. Most people have the tendency to invest at the wrong time. This is where the old adage of “buy low and sell high” comes into play. A smart investor waits for the stock to go low so he can buy low and sell high. That’s why billionaire investor Warren Buffett says, “Be greedy when others are fearful and be fearful when others are
greedy.” In other words, don’t completely follow the crowd and don’t be afraid to invest when you see an idea and everyone else is scared. It may run contrary to common thought, but smart investors across the globe see the best time to invest in the stock market as when it’s performing at its worst. When the stock market sinks or stalls, it is a buyer’s market. This is simply due to the fact that stocks are fluid forms of value; they change in worth often and sometimes drastically. When the economy starts to underperform, people tend to sell off their investments. It is an obvious response to people seeing their stock portfolio values go lower and lower. These mass pull-outs of investments cause the overall market to go into panic mode, dropping prices for stocks across the board. So, what does a wise investor with skilled investing strategies do in this situation? Buy! But of course, other factors are at play: market conditions, currency trading and aspects specific to a particular stock should also be taken into consideration when buying stocks. However, if you have the cash in hand to buy into stocks while they are undervalued due to market conditions, you can make some excellent investments. But there is no perfect time of day, hour or date to buy stocks. Timing stock buys is also based on other mitigating factors. Most investors, however, do the opposite and buy high because they believe it’ll keep going higher. We see in practice that most people will only seek financial advice when the market is ‘good’, which ironically is not the best time to buy. Financial advisers who can only earn a commission selling investment products will tell their clients to buy despite it being the worst time to do so. Hardly anyone would seek advice from financial advisers when times are bad. In fact, many financial advisers themselves would recommend ‘safer’ products when actually it is the most viable time to enter into equity markets. Buying in a down market results in ‘cost averaging’, which means that you have a greater opportunity to gather large gains in the future. However, there is more to understanding when to buy stocks than simply ‘buy low, sell high’ or ‘buy in a down market’. The following are tips to help you decide when to buy stocks in order to maximize your future returns.
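To make the ‘cost averaging’ idea concrete, here is a minimal sketch in Python that computes the average cost per share when the same amount is invested at successively lower prices in a falling market; the prices and amounts are made up for the illustration.

```python
def average_cost(amount_per_purchase: float, prices: list[float]) -> float:
    """Average cost per share when a fixed amount is invested at each price."""
    total_shares = sum(amount_per_purchase / price for price in prices)
    total_spent = amount_per_purchase * len(prices)
    return total_spent / total_shares

if __name__ == "__main__":
    # Investing Rs. 10,000 at each of three falling prices.
    falling_prices = [100.0, 80.0, 50.0]
    print(round(average_cost(10_000, falling_prices), 2))  # 70.59, below the 76.67 simple price average
```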
Tips on when to buy
Research the fees that are associated with buying and selling stocks. These fees directly eat into your profits. Because of this, it is often beneficial to buy stocks in bulk and hold for
a while rather than buying and selling rapidly. Know the company. Even if a stock is at a historically low price, you may not want to buy. Consider whether a rebound is expected and, if so, what time frame it will require. You want to purchase stocks in a healthy company that will see future returns, not one that is on a fatal path downward. First, be sure that you are well-educated. Do your own research: ask other investors, gather information from the regulator, publications and articles, and speak to people in the industry about the company, the industry and any fees you may incur from purchasing stocks. Know the industry. Selection of the market leader and the right industries is critical. Trust your gut. Money, including investments, is tied to emotions. Follow research and advice, but also trust your instincts. Make decisions so that you will be able to sleep well at night. No one likes to lose money in an investment. Therefore, perhaps more complicated than simply buying a stock is the process of selling stock. Stock is easy to sell. Simply contacting your broker or utilizing the website of your online stockbroker can effect this transaction for you in minutes. It’s not the act of selling stock, but rather the timing of stock sales to maximize profits, where the need for precision lies. There is no such thing as the best time to sell stock when speaking of the hours in a day. The best time to sell stock is particular to each investor, the state of the market and the stock in question. Certainly, the best answer to when to sell a stock is: before it declines in value. In theory this is nice: make the most money one can on a stock or bond, then get out and sell. However, in reality, timing stock sales well takes practice, diligence and, at times, a lot of patience.
Most investors fail to set basic criteria before investing: profit goals. When investing in a stock, one should establish a set amount of profit to make on that stock. When this limit is reached, selling stock should not be a thought, but rather an act. For example, purchasing stock in Company X at its current trading value of Rs. 10 per share establishes your starting point. Say you set your profit goal for this particular stock at 30%, or a Rs. 3.00 increase in the stock price, a healthy return on any stock investment. So, when the stock reaches Rs. 13, you have reached your profit goal for this stock and you should sell. Walking away with a 30% gain on your investment is excellent and far better than your money would have earned in nearly any other place. The average investor who loses money, or who simply does not maximize the amount of money they could have made buying and selling stocks, usually falls into this pitfall: not selling stocks. Many investors watch their stocks soar and then, unable to accept that the stock is no longer increasing in value, hold on to it as it falls. This is the most common problem with investors timing stock sales. They simply cannot let go of their stocks and therefore follow them all the way down.
| [Text]
=======
**Best time to invest in stock market**
The Colombo stock market has gone up by over 1,000 points (more than 20%) during the last few weeks. With this growth, a large number of investors are either trying to enter the market or trying to maximize profits from their existing investments. To assist them in their investment decisions, this week we will discuss a question that investors ask frequently, and for good reason: Is there ever a good time to invest in the stock market? No one wants to invest in the stock market only to see it fall the following day or even the following week.
Is there a right time to invest in stock market?
Is there a right time to invest in the stock market? That’s the magic question people have asked for as long as the stock market has been around. The simplest answer is that there is no right time to invest in the stock market. But it may seem as if some people have figured it out, such as billionaires like Warren Buffett who seem to always know when to invest, how much to invest and where to put their money. But investors like him consider many more factors that have less to do with guessing the ‘right time’ and more to do with trying to predict how the stock will do based on recent reports and announcements by the company. Even then, they could be wrong. Many investors buy into and sell out of the market more frequently than they should. They are trying to ‘time’ the market. If you have never heard of this term before, it is described as trying to pick when the stock market has hit a top or a bottom and then buying into or selling out of the market accordingly. For example, if you are predicting that the market has hit the peak of the cycle, then you sell out of your holdings because the market has nowhere to go but down. Conversely, if you think the market has bottomed, meaning it won’t go any lower, you invest your money, since the market can only go up. Many smart investors try to predict how stocks and the overall stock market will behave and try to invest according to what they believe will happen. Even though they may predict the market right nine out of 10 times, they will still get it wrong that 10th time and it will cost them money, either because they invested in the wrong stock or didn’t invest in a stock that skyrocketed to the top. It’s extremely difficult to predict how stocks or the stock market will do. It is, however, possible to predict certain trends because they are more obvious than many of the other subtler factors that can determine how well a stock does. There are people out there who claim to know exactly when to invest in the market. And a lot of people actually believe them because of what they see. But the fact is usually that these people invest in many different sectors of the stock market and, when they see success in one sector, they only share that success, which makes it seem like they know what they are talking about all the time. This isn’t actually a scam (although there are scams like this); it’s more a case of the person hoping his or her research pays off, and more often than not, it does. Just like people, you have companies with websites claiming to know which stocks will go up in price. And just like the people claiming to be stock market whisperers, these companies do extensive research which gives them hints about which companies will go up and which will go down. Then they share the information with the public, most of the time for a fee.
Is there a wrong time to invest in stock market?
Unfortunately, it seems that there is a wrong time. Most people have the tendency to invest at the wrong time. This is where the old adage of “buy low and sell high” comes into play. A smart investor waits for a stock to go low so he can buy low and sell high. That’s why billionaire investor Warren Buffett says, “Be greedy when others are fearful and be fearful when others are
greedy.” In other words, don’t completely follow the crowd and don’t be afraid to invest when you see an idea and everyone else is scared. It may run contrary to common thought, but smart investors across the globe see the best time to invest in the stock market as when it is performing at its worst. When the stock market sinks or stalls, it is a buyer’s market. This is simply due to the fact that stocks are fluid forms of value; they change in worth often and sometimes drastically. When the economy starts to underperform, people tend to sell off their investments. It is an obvious response to people seeing their stock portfolio values go lower and lower. These mass pull-outs of investments cause the overall market to go into panic mode, dropping prices for stocks across the board. So, what does a wise investor with skilled investing strategies do in this situation? Buy! But of course, other factors are at play: market conditions, currency trading and aspects specific to a particular stock should also be taken into consideration when buying stocks. However, if you have the cash in hand to buy into stocks while they are undervalued due to market conditions, you can make some excellent investments. But there is no perfect time of day, hour or date to buy stocks. Timing stock buys is also based on other mitigating factors. Most investors, however, do the opposite and buy high because they believe it’ll keep going higher. In practice, we see that most people only seek financial advice when the market is ‘good’, which ironically is not the best time to buy. Financial advisers who could only earn a commission selling investment products will tell their clients to buy despite it being the worst time to do so. Hardly anyone would seek advice from financial advisers when times are bad. In fact, many financial advisers themselves would recommend ‘safer’ products when it is actually the most viable time to enter equity markets. Buying in a down market results in ‘cost averaging’, which means that you have a greater opportunity to gather large gains in the future. However, there is more to understanding when to buy stocks than simply ‘buy low, sell high’ or ‘buy in a down market’. The following are tips to help you decide when to buy stocks in order to maximize your future returns.
Tips on when to buy
Research the fees that are associated with buying and selling stocks. These fees directly eat into your profits. Because of this, it is often beneficial to buy stocks in bulk and hold for
a while rather than buying and selling rapidly. Know the company. Even if a stock is at a historically low price, you may not want to buy. Consider whether a rebound is expected and, if so, what time frame it will require. You want to purchase stocks in a healthy company that will see future returns, not one that is on a fatal path downward. First, be sure that you are well-educated. Do your own research: ask other investors, gather information from the regulator, publications and articles, and speak to people in the industry about the company, the industry and any fees you may incur from purchasing stocks. Know the industry. Selection of the market leader and the right industries is critical. Trust your gut. Money, including investments, is tied to emotions. Follow research and advice, but also trust your instincts. Make decisions so that you will be able to sleep well at night. No one likes to lose money in an investment. Therefore, perhaps more complicated than simply buying a stock is the process of selling stock. Stock is easy to sell. Simply contacting your broker or utilizing the website of your online stockbroker can effect this transaction for you in minutes. It’s not the act of selling stock, but rather the timing of stock sales to maximize profits, where the need for precision lies. There is no such thing as the best time to sell stock when speaking of the hours in a day. The best time to sell stock is particular to each investor, the state of the market and the stock in question. Certainly, the best answer to when to sell a stock is: before it declines in value. In theory this is nice: make the most money one can on a stock or bond, then get out and sell. However, in reality, timing stock sales well takes practice, diligence and, at times, a lot of patience.
Most investors fail to set basic criteria before investing: profit goals. When investing in a stock, one should establish a set amount of profit to make on that stock. When this limit is reached, selling stock should not be a thought, but rather an act. For example, purchasing stock in Company X at its current trading value of Rs. 10 per share establishes your starting point. Say you set your profit goal for this particular stock at 30%, or a Rs. 3.00 increase in the stock price, a healthy return on any stock investment. So, when the stock reaches Rs. 13, you have reached your profit goal for this stock and you should sell. Walking away with a 30% gain on your investment is excellent and far better than your money would have earned in nearly any other place. The average investor who loses money, or who simply does not maximize the amount of money they could have made buying and selling stocks, usually falls into this pitfall: not selling stocks. Many investors watch their stocks soar and then, unable to accept that the stock is no longer increasing in value, hold on to it as it falls. This is the most common problem with investors timing stock sales. They simply cannot let go of their stocks and therefore follow them all the way down.
[Question]
=======
According to Warren Buffett, when is the best time to invest in the stock market?
[Task Description]
=======
Provide the answer based solely on the document provided. The answer should be in complete sentences. |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | I had to walk to the shop today and it was pouring down rain and it was freezing outside. I was dressed accordingly with a hat and a jacket, but I'm worried about getting sick. I've always heard that being outside in cold and wet weather results in colds. Will I get sick? I have a bit of a runny nose now. | Cold air cannot make you sick, but the odds of catching the common cold or influenza (the flu) are highest during the fall and winter.
Although cold weather doesn't cause a cold or the flu, it can set the stage for certain factors that decrease your immunity and increase opportunities to catch these illnesses.1
This article explores the factors associated with cold weather that increase the chances of getting sick.
There is some evidence to suggest that viruses spread more easily through cold, dry air. Temperature and humidity, therefore, may affect your risk of catching a virus.
When it is cold outside, the air is drier both outdoors and inside (due to heating), drying out mucous membranes and making it easier for germs to take hold.
The nose is an ideal host for many viruses due to its cooler temperature. The typical core body temperature is 98.6 degrees F, but the nasal cavity temperature is lower at 91.4 degrees F. Research suggests that rhinoviruses (viruses that cause the common cold) do not replicate efficiently at body temperature, but the cooler temperature in the nose may make it an ideal breeding ground for viruses.2
One study suggests colder temperatures on their own do not increase the spread of colds and flu, but temperature and humidity fluctuations do. Researchers cross-referenced confirmed cases of rhinovirus with weather data over a set period of time and discovered that decreases in either temperature or humidity over a three-day period increased the risk of rhinovirus infections.3
The study, which involved 892 men in the Finnish military, also suggests that breathing cold air may contribute to the spread of infection into the lungs. This is based on earlier research that found lung temperature can be lowered by inhaling cold air. However, researchers also noted that the risk of rhinovirus infection is reduced at subfreezing temperatures and higher humidity.3
Warmer air does not necessarily kill viruses, either, as is evidenced by the spread of colds and flu in tropical areas where it does not get cold. Cold and flu cases are more prevalent in tropical climates during the rainy season. This is likely due to people spending more time indoors when it's raining, putting them in closer contact with others than during the dry season.
Reduced Immune Function
People may also be more prone to catching a cold or flu in the winter due to lower immunity. Fewer daylight hours and less time spent outside mean less exposure to sunlight, which the body uses to make vitamin D. In addition, lack of activity during cold weather may also mean reduced immunity.
Vitamin D
Vitamin D plays a critical role in the immune system helping to keep you healthy. Vitamin D deficiency is linked to an increased risk of viral infections, including those in the respiratory tract.5
Researchers are studying whether vitamin D supplements can help improve immunity when vitamin D levels are low. A review of 25 studies concluded that vitamin D supplementation was safe and it protected against acute respiratory tract infection. People who were very vitamin D deficient and those not receiving high single doses of vitamin D experienced the most benefit.6
Lack of Exercise
People also tend to be less active in cold weather. While it is not clear exactly if or how exercise increases your immunity to certain illnesses, and there is no solid evidence, there are several theories about exercise, such as:7
It improves circulation, allowing white blood cells to detect and fight an infection faster.
It increases body temperature during and right after a workout, which may work like a fever to prevent bacteria from growing.
It may help to flush bacteria from the lungs and airways, reducing your chances of getting sick.
It lowers levels of stress hormones, which may protect against illness.
Close Contact During Colder Months
Viruses rely on the cells of other organisms to live and replicate. They are transmitted from host to host when infected respiratory secretions make their way into the mucous membranes of a healthy person. How transmission occurs may include:8
Direct person-to-person contact, such as hugging, kissing, or shaking hands
Inhaling small droplets in the air from a sneeze or cough
Touching something that has the virus on it (like a doorknob, drinking glass, utensils, or toys) and then touching your mouth, nose, or eyes
It logically follows, then, that the closer you are to people and the more you share a space, the more likely transmission is. In the winter, many people tend to take their outdoor activities inside. For example:
School recess being held in a gym, rather than outside
People walking around crowded shopping centers rather than on a track or in their neighborhood
People staying indoors more hours of the day
This close contact during colder months increases the likelihood of passing germs.
Protection From Cold and Flu
The most important thing to remember during cold and flu season is to protect yourself and stop the spread of these germs when you are around other people. Steps you can take to prevent cold and flu include:98
Wash your hands often or use an alcohol-based hand sanitizer if soap and water aren't available.
Avoid close contact with people who are sick.
Stay home when you are sick.
Cover your mouth and nose with a tissue or the inside of your elbow when you cough or sneeze.
Wear a face mask in crowded places.
Try to avoid touching your eyes, nose, or mouth as much as possible, since that is how most respiratory germs enter the body.
Clean and disinfect frequently touched surfaces at home, work, or school, especially when someone is sick.
Get your yearly flu vaccine and any other recommended vaccines.
Get enough sleep.
Exercise regularly.
Drink plenty of fluids.
Follow a healthy diet. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
I had to walk to the shop today and it was pouring down rain and it was freezing outside. I was dressed accordingly with a hat and a jacket, but I'm worried about getting sick. I've always heard that being outside in cold and wet weather results in colds. Will I get sick? I have a bit of a runny nose now.
Cold air cannot make you sick, but the odds of catching the common cold or influenza (the flu) are highest during the fall and winter.
Although cold weather doesn't cause a cold or the flu, it can set the stage for certain factors that decrease your immunity and increase opportunities to catch these illnesses.1
This article explores the factors associated with cold weather that increase the chances of getting sick.
There is some evidence to suggest that viruses spread more easily through cold, dry air. Temperature and humidity, therefore, may affect your risk of catching a virus.
When it is cold outside, the air is drier both outdoors and inside (due to heating), drying out mucous membranes and making it easier for germs to take hold.
The nose is an ideal host for many viruses due to its cooler temperature. The typical core body temperature is 98.6 degrees F, but the nasal cavity temperature is lower at 91.4 degrees F. Research suggests that rhinoviruses (viruses that cause the common cold) do not replicate efficiently at body temperature, but the cooler temperature in the nose may make it an ideal breeding ground for viruses.2
One study suggests colder temperatures on their own do not increase the spread of colds and flu, but temperature and humidity fluctuations do. Researchers cross-referenced confirmed cases of rhinovirus with weather data over a set period of time and discovered that decreases in either temperature or humidity over a three-day period increased the risk of rhinovirus infections.3
The study, which involved 892 men in the Finnish military, also suggests that breathing cold air may contribute to the spread of infection into the lungs. This is based on earlier research that found lung temperature can be lowered by inhaling cold air. However, researchers also noted that the risk of rhinovirus infection is reduced at subfreezing temperatures and higher humidity.3
Warmer air does not necessarily kill viruses, either, as is evidenced by the spread of colds and flu in tropical areas where it does not get cold. Cold and flu cases are more prevalent in tropical climates during the rainy season. This is likely due to people spending more time indoors when it's raining, putting them in closer contact with others than during the dry season.
Reduced Immune Function
People may also be more prone to catching a cold or flu in the winter due to lower immunity. Fewer daylight hours and less time spent outside mean less exposure to sunlight, which the body uses to make vitamin D. In addition, lack of activity during cold weather may also mean reduced immunity.
Vitamin D
Vitamin D plays a critical role in the immune system helping to keep you healthy. Vitamin D deficiency is linked to an increased risk of viral infections, including those in the respiratory tract.5
Researchers are studying whether vitamin D supplements can help improve immunity when vitamin D levels are low. A review of 25 studies concluded that vitamin D supplementation was safe and it protected against acute respiratory tract infection. People who were very vitamin D deficient and those not receiving high single doses of vitamin D experienced the most benefit.6
Lack of Exercise
People also tend to be less active in cold weather. While it is not clear exactly if or how exercise increases your immunity to certain illnesses, and there is no solid evidence, there are several theories about exercise, such as:7
It improves circulation, allowing white blood cells to detect and fight an infection faster.
It increases body temperature during and right after a workout, which may work like a fever to prevent bacteria from growing.
It may help to flush bacteria from the lungs and airways, reducing your chances of getting sick.
It lowers levels of stress hormones, which may protect against illness.
Close Contact During Colder Months
Viruses rely on the cells of other organisms to live and replicate. They are transmitted from host to host when infected respiratory secretions make their way into the mucous membranes of a healthy person. How transmission occurs may include:8
Direct person-to-person contact, such as hugging, kissing, or shaking hands
Inhaling small droplets in the air from a sneeze or cough
Touching something that has the virus on it (like a doorknob, drinking glass, utensils, or toys) and then touching your mouth, nose, or eyes
It logically follows, then, that the closer you are to people and the more you share a space, the more likely transmission is. In the winter, many people tend to take their outdoor activities inside. For example:
School recess being held in a gym, rather than outside
People walking around crowded shopping centers rather than on a track or in their neighborhood
People staying indoors more hours of the day
This close contact during colder months increases the likelihood of passing germs.
Protection From Cold and Flu
The most important thing to remember during cold and flu season is to protect yourself and stop the spread of these germs when you are around other people. Steps you can take to prevent cold and flu include:98
Wash your hands often or use an alcohol-based hand sanitizer if soap and water aren't available.
Avoid close contact with people who are sick.
Stay home when you are sick.
Cover your mouth and nose with a tissue or the inside of your elbow when you cough or sneeze.
Wear a face mask in crowded places.
Try to avoid touching your eyes, nose, or mouth as much as possible, since that is how most respiratory germs enter the body.
Clean and disinfect frequently touched surfaces at home, work, or school, especially when someone is sick.
Get your yearly flu vaccine and any other recommended vaccines.
Get enough sleep.
Exercise regularly.
Drink plenty of fluids.
Follow a healthy diet.
https://www.verywellhealth.com/does-cold-weather-cause-the-cold-or-flu-770379 |
Respond only with information present in the document. If the information is not present, respond with "This information is not available". When possible, use quotations and cite the document directly. | What do I need to know about my Financial Accounting program? | 1
Area of Interest: Business
Business - Accounting
Ontario College Diploma
2 Years
Program Code: 0214C01FWO
Ottawa Campus
Our Program
Get the essential skills to start a career in Accounting.
The Business - Accounting Ontario College Diploma program balances accounting theory with
tools used in the industry. This two-year program equips you with the essential skills for various
entry-level accounting positions.
Learn how to complete accounting tasks, from conducting bookkeeping responsibilities to
preparing financial statements and personal income tax returns. Expand your knowledge of various
business concepts including economics and finance.
Explore accounting concepts while sharpening your communication, math and technological skills.
Courses incorporate accounting software to strengthen your computer literacy and provide you
with up-to-date technical skills, which are essential in this field.
In the program`s final semester, you have the opportunity to apply for a work placement to
practise your skills in a real work setting. See Additional Information for eligibility requirements.
Students considering a professional accounting designation or an accounting credential are
advised to make inquiries with the Chartered Professional Accountants of Ontario (CPA Ontario)
before deciding to complete this program. See Additional Information for further details.
This program prepares you for entry-level positions in:
- financial accounting
- managerial accounting
- payables and receivables
- taxation
Graduates typically find employment in roles such as:
- accounts payable clerk
- accounts receivable clerk
- bookkeeper
- payroll clerk
- junior staff accountant
SUCCESS FACTORS
This program is well-suited for students who:
- Enjoy problem solving and critical-thinking activities.
- Are inquisitive and have an analytical nature.
- Can work well independently and in a group.
- Are detail-oriented, organized and adaptable.
- Are comfortable using a variety of computer applications.
- Possess a high standard of ethics.
Employment
Graduates may pursue employment opportunities in various entry-level positions including
accounts receivables or payables, bookkeeping or payroll responsibilities. Roles include:
accounting assistant, accounts payable clerk, accounts receivable clerk, bookkeeper, and payroll
clerk.
Learning Outcomes
The graduate has reliably demonstrated the ability to:
- Record financial transactions in compliance with Canadian Generally Accepted Accounting
Principles for sole proprietorships, partnerships, private enterprises, publicly accountable
enterprises and non-profit organizations.
- Prepare and present financial statements, reports and other documents in compliance with
Canadian Generally Accepted Accounting Principles for sole proprietorships, partnerships and
private enterprises.
- Contribute to recurring decision-making by applying fundamental management accounting
concepts.
- Prepare individuals` income tax returns and basic tax planning in compliance with relevant
legislation and regulations.
- Analyze organizational structures, the interdependence of functional areas, and the impact
those relationships can have on financial performance.
- Analyze, within a Canadian context, the impact of economic variables, legislation, ethics,
technological advances and the environment on an organization`s operations.
- Outline the elements of an organization`s internal control system and risk management.
- Contribute to recurring decision-making by applying fundamental financial management
concepts.
- Identify and apply discipline-specific practices that contribute to the local and global
community through social responsibility, economic commitment and environmental
stewardship.
Program of Study
Level: 01 Courses Hours
ACC2201 Financial Accounting I 56.0
BUS2301 Business Computer Applications 42.0
ENL1813B Communications I 42.0
MGT2201 Business Fundamentals 42.0
QUA2210 Basic Business Mathematics 56.0
Level: 02 Courses Hours
ACC2202 Financial Accounting II 56.0
ACC2343 Spreadsheet Applications 56.0
ECO2200 Economic Issues 42.0
ENL1823B Communications II 42.0
FIN2230 Finance 42.0
Level: 03 Courses Hours
ACC2209 Financial Accounting III 70.0
ACC2233 Management Accounting I 56.0
ACC2262 Introduction to Personal Taxation 56.0
ACC2385 Accounting Software Applications 56.0
English General Education Elective: choose 1 Courses Hours
ENL1725 Canadian Identity 42.0
ENL1726 Symbols, Text and Meaning 42.0
ENL1798 Contemporary Canadian Issues 42.0
ENL1825 Communication Dynamics 42.0
ENL1829 The Art of Oratory 42.0
Level: 04 Courses Hours
ACC2211 Payroll and Compliance 56.0
ACC2234 Management Accounting II 56.0
ACC2265 Audit Principles and Business Issues 56.0
Elective: choose 1 Courses Hours
ACC0012 Integrated Accounting Practice 56.0
ACC0044 Work Experience 56.0
Choose one from equivalencies: Courses Hours
GED0214C General Education Elective 42.0
Fees for the 2023/2024 Academic Year
Tuition and related ancillary fees for this program can be viewed by using the Tuition and Fees
Estimator tool at https://www.algonquincollege.com/fee-estimator .
Further information on fees can be found by visiting the Registrar`s Office website at
https://www.algonquincollege.com/ro .
Fees are subject to change.
Additional program related expenses include:
- Books and supplies cost approximately $600 to $800 per term. However in Levels 03 and 04
of the program, books may cost up to $1,000.
- Books and supplies can be purchased from the campus store. For more information visit
https://www.algonquincollege.com/coursematerials .
Admission Requirements for the 2024/2025 Academic Year
College Eligibility
- Ontario Secondary School Diploma (OSSD) or equivalent. Applicants with an OSSD showing
senior English and/or Mathematics courses at the Basic Level, or with Workplace or Open
courses, will be tested to determine their eligibility for admission; OR
- Academic and Career Entrance (ACE) certificate; OR
- General Educational Development (GED) certificate; OR
- Mature Student status (19 years of age or older and without a high school diploma at the
start of the program). Eligibility may be determined by academic achievement testing for which
a fee of $50 (subject to change) will be charged.
Program Eligibility
- English, Grade 12 (ENG4C or equivalent).
- Mathematics, Grade 12 (MAP4C or equivalent).
- Applicants with international transcripts must provide proof of the subject-specific
requirements noted above and may be required to provide proof of language proficiency.
Domestic applicants with international transcripts must be evaluated through the International
Credential Assessment Service of Canada (ICAS) or World Education Services (WES).
- IELTS-International English Language Testing Service (Academic) Overall band of 6.0 with a
minimum of 5.5 in each band; OR TOEFL-Internet-based (iBT) Overall 80, with a minimum of 20
in each component: Reading 20; Listening 20; Speaking 20; Writing 20; OR Duolingo English
Test (DET) Overall 110, minimum of 110 in Literacy and no score below 95.
- Not sure if you meet all of the requirements? Academic Upgrading may be able to help with
that: https://www.algonquincollege.com/access .
Should the number of qualified applicants exceed the number of available places, applicants will be
selected on the basis of their proficiency in English and mathematics.
Admission Requirements for 2023/2024 Academic Year
College Eligibility
- Ontario Secondary School Diploma (OSSD) or equivalent. Applicants with an OSSD showing
senior English and/or Mathematics courses at the Basic Level, or with Workplace or Open
courses, will be tested to determine their eligibility for admission; OR
- Academic and Career Entrance (ACE) certificate; OR
- General Educational Development (GED) certificate; OR
- Mature Student status (19 years of age or older and without a high school diploma at the
start of the program). Eligibility may be determined by academic achievement testing for which
a fee of $50 (subject to change) will be charged.
Program Eligibility
- English, Grade 12 (ENG4C or equivalent).
- Mathematics, Grade 12 (MAP4C or equivalent).
- Applicants with international transcripts must provide proof of the subject specific
requirements noted above and may be required to provide proof of language proficiency.
Domestic applicants with international transcripts must be evaluated through the International
Credential Assessment Service of Canada (ICAS) or World Education Services (WES).
- IELTS-International English Language Testing Service (Academic) Overall band of 6.0 with a
minimum of 5.5 in each band; OR TOEFL-Internet-based (iBT) Overall 80, with a minimum of 20
in each component: Reading 20; Listening 20; Speaking 20; Writing 20.
Not sure if you meet all of the requirements? Academic Upgrading may be able to help with that:
https://www.algonquincollege.com/access/ .
Should the number of qualified applicants exceed the number of available places, applicants will be
selected on the basis of their proficiency in English and mathematics.
Application Information
BUSINESS - ACCOUNTING
Program Code 0214C01FWO
Applications to full-time day programs must be submitted with official transcripts showing
completion of the academic admission requirements through:
ontariocolleges.ca
60 Corporate Court
Guelph, Ontario N1G 5J3
1-888-892-2228
Students currently enrolled in an Ontario secondary school should notify their Guidance Office
prior to their online application at http://www.ontariocolleges.ca/ .
Applications for Fall Term and Winter Term admission received by February 1 will be given equal
consideration. Applications received after February 1 will be processed on a first-come, first-served
basis as long as places are available.
International applicants please visit this link for application process information:
https://algonquincollege.force.com/myACint/ .
For further information on the admissions process, contact:
Registrar`s Office
Algonquin College
1385 Woodroffe Ave
Ottawa, ON K2G 1V8
Telephone: 613-727-0002
Toll-free: 1-800-565-4723
TTY: 613-727-7766
Fax: 613-727-7632
Contact: https://www.algonquincollege.com/ro
Additional Information
This program offers a September start or January start. Students who start in January must
complete their second level of the program in the Spring term and continue into the third level in
the Fall term.
Classes in this program may be scheduled between 8:00 AM and 10:00 PM, Monday through
Friday.
Work placement is an option available to students in the fourth level of this program. Work
placement is only available in the Winter term. Participants of the optional work placement will
receive a course credit for ACC0044 (Work Experience) in lieu of taking a fifth course on campus
during the fourth level of the program. Students must meet eligibility requirements in order to
participate in the work placement.
To be eligible to apply for work placement, students must be registered full-time with the regular
on-campus program, must have completed all level 1, 2 and 3 courses, must not have any academic
encumbrances and must meet certain academic standings. Due to the high demand for work
placements, some students may be required to secure their own placement subject to approval by
the program coordinator.
Students considering completing a degree after their diploma may be able to apply some courses
towards a degree through various university articulation agreements. For further information see
https://www.algonquincollege.com/degree-pathways/list/ .
Students considering pursuing a professional accounting designation are advised to make inquiries
with the Chartered Professional Accountants of Ontario (CPA Ontario). Please note that Algonquin
College courses are not directly transferrable to CPA unless they are transferred through a
recognized articulation agreement with a university. For further information see
http://www.cpaontario.ca/become-a-cpa/get-started .
Course Descriptions
ACC0012 Integrated Accounting Practice
Students draw upon knowledge learned throughout the program to participate in weekly duties
that simulate authentic business practices. Students integrate and apply their knowledge of
fundamental accounting and taxation to complete various tasks using professional business writing
skills and computer software.
Prerequisite(s): ACC2209 and ACC2233 and ACC2385 or ACC2341 and ACC2354 and ACC2385
Corerequisite(s):none
ACC0044 Work Experience
Accounting experience is advantageous when students search for work after graduation. Students
apply the skills and knowledge acquired to date in the program to a practical work environment.
Students report to a select employer and complete accounting-related tasks. Upon completion of
the work placement, employers and students rate the experience.
Prerequisite(s): ACC2209 and ACC2233 and ACC2262 and ACC2385 or ACC2341 and ACC2344 and
ACC2354 and ACC2385
Corerequisite(s):none
ACC2201 Financial Accounting I
This is the first course in a series of three financial accounting courses in this program. Students
learn to identify, measure, record and report financial transactions. Students learn the
fundamentals of the accounting cycle necessary to complete the financial statements and
accounting records of a business. Through a combination of in class lectures, practical exercises
and the use of computer assisted tools, students develop an understanding of essential accounting
concepts necessary for future studies.
Prerequisite(s): none
Corerequisite(s):none
ACC2202 Financial Accounting II
Building on previous studies in financial accounting, students expand their knowledge of
fundamental accounting concepts involved in measuring and recording financial transactions,
including analyzing these transactions and reporting them in the financial records of a business.
Students experience a combination of in class lectures, practical exercises and the use of
computerized tools to aid in the progress and understanding of vital accounting concepts.
Prerequisite(s): ACC2201 or ACC1100 and ACC1211 or ACC2310
Corerequisite(s):none
ACC2209 Financial Accounting III
This is the third and final financial accounting course in the program. Students examine the
transactions specific to corporations as well as more complex accounting topics. This course builds
on the material learned in the previous two financial accounting courses. Through a combination of
in class lectures, practical exercises and use of computer assisted tools, students develop an
understanding of essential accounting concepts necessary for the work place.
Prerequisite(s): ACC2202 or ACC2341
Corerequisite(s):none
ACC2211 Payroll and Compliance
Payroll and statutory compliance reporting is mandatory for every business to adhere to. Students
learn how to apply payroll legislation to calculate deductions, net pay, and remittances, and
complete year-end payroll reporting. Students are introduced to the different types of
requirements with which businesses are expected to comply, including GST/HST, QST, EHT and
workers' compensation. Through a combination of theory and practical activities, students prepare
these submissions and calculations by reading through relevant legislation and completing
activities.
Prerequisite(s): ACC2202 or ACC2341
Corerequisite(s):none
ACC2233 Management Accounting I
Managerial accounting aids organizations in making well-informed decisions helping businesses to
succeed. Students are introduced to key workplace skills by exploring the goals, methods and
ethics of managerial accounting including product costing methods and the flow of costs for
manufacturing companies. Additionally, students will focus on decision-making tools including,
cost-volume-profit and contribution analysis. The curriculum is delivered in lecture, case study and
problem-solving format.
Prerequisite(s): ACC2202 or ACC2341
Corerequisite(s):none
ACC2234 Management Accounting II
Students continue to enhance key workplace skills and business ethics by further exploring the
goals and methods of managerial accounting. Specific topic areas include merchandising
budgeting, performance evaluation in decentralized businesses and financial statement analysis.
Students will also focus on business decisions using managerial accounting tools such as relevant
costing. The curriculum is delivered in lecture, case and problem-solving format making extensive
use of Excel.
Prerequisite(s): ACC2233 and ACC2343 or ACC2343 and ACC2354
Corerequisite(s):none
ACC2262 Introduction to Personal Taxation
The ability to complete personal tax returns is an essential skill to have in the accounting field.
Using a variety of methods, including authentic tax cases, lectures and classroom discussions,
students use professional tax software to prepare personal Canadian tax returns for individuals
and unincorporated businesses. Basic principles of tax planning and tax legislation are covered. It is
strongly recommended that students use a Windows-based laptop (not a Mac).
Prerequisite(s): ACC2202 or ACC2313
Corerequisite(s):none
ACC2265 Audit Principles and Business Issues
Students are introduced to current business issues relevant to Canadian organizations as well as
key auditing concepts that help to guide business ethics and decisions. Topics discussed include
the Canadian business environment and the current issues it faces, the need for greater ethical and
responsible behaviour in light of recent business scandals, fraud and the need for internal controls,
risk management and financial statement analysis. Various types of audit and techniques are
examined by students. Classroom lectures are enhanced by reading current material and
researching information using various tools.
Prerequisite(s): ACC2209 or ACC2341
Corerequisite(s):none
ACC2343 Spreadsheet Applications
Students enhance their knowledge of spreadsheets learned in BUS2301. Using Microsoft Excel,
students explore some of the more advanced Excel features, such as financial functions, charts,
logical functions, pivot tables, lists and look-up tables. These spreadsheet concepts are explored
through Excel-based lectures and hybrid activities including learning resource applications.
Prerequisite(s): ACC2201 and BUS2301 or ACC2313 and BUS2301 or ACC1100 and ACC1211 and
BUS2301
Corerequisite(s):none
ACC2385 Accounting Software Applications
It is a workplace expectation that students are comfortable using accounting software on a day-to-day basis. Students gain practical experience using computerized accounting software to record
transactions and prepare financial statements. Curriculum is delivered in lecture format and by
hands-on completion of cases using accounting software. A Windows-based laptop (not a Mac) is
strongly recommended.
Prerequisite(s): ACC2341 or ACC2202
Corerequisite(s):none
BUS2301 Business Computer Applications
The knowledge gained in this course provides students with a solid foundation for future learning in
other studies, and in business. Students learn a variety of tasks in Windows file management.
Students also learn tasks and produce assignments in Microsoft Office for PC. Web-based software
running computer-based simulations and assessments are used to support and enrich the learning
experience. It is strongly recommended that students use a PC. Mac students may be required to
install and learn additional software to successfully complete the course.
Prerequisite(s): none
Corerequisite(s):none
ECO2200 Economic Issues
Knowledge of contemporary economic issues is essential to understanding the world in which we
live. Students examine the fundamental economic issues faced by modern economies with an
emphasis on the use of economic models to analyze economic developments accurately and
objectively. Key economic problems faced by society, and policy alternatives that governments
may use to deal with these problems are also investigated. Influence of economics on student civic,
working and personal lives is explored through lectures, discussions, and monitoring of current
economic events.
Prerequisite(s): none
Corerequisite(s):none
ENL1725 Canadian Identity
Canadian identity is challenging to define, but depictions of our multicultural society are found and
explored in our writing. This course explores the importance of writers' perceptions of Canada,
how they promote their ideas through publishing, and how those published works have affected
Canadian society in recent history. Students are introduced to a wide range of writing with the aim
of exploring the theme of Canadian identity while enhancing students' awareness of the ethical
considerations necessary for a just society.
Prerequisite(s): none
Corerequisite(s):none
ENL1726 Symbols, Text and Meaning
Symbols and text are used to express, evoke, and manipulate an entire range of human emotions
and reactions. In this interactive, discussion-based course, students will explore historical and
contemporary approaches to using symbols, text, and language in conceptual and contemporary
art, graphic design and advertising, poetry and lyrics, and in online technology. Through discussion,
analysis, informal debate, and critical thinking, students will explore how symbols and text influence
individuals, society and culture.
Prerequisite(s): none
Corerequisite(s):none
ENL1798 Contemporary Canadian Issues
A critical understanding of contemporary Canadian issues is vital to being an active member in our
democratic society. Students explore a variety of topics and analyze their ethical implications and
relevance to Canadian life. Discussions, debates and other collaborative activities offer
opportunities to consider recent controversies from different perspectives, and use of a variety of
media (e.g. newspapers, articles, and other resources online) allows for in-depth reflection on the
history and current state of a range of social and political topics.
Prerequisite(s): none
Corerequisite(s):none
ENL1813B Communications I
Communication remains an essential skill sought by employers, regardless of discipline or field of
study. Using a practical, vocation-oriented approach, students develop stronger grammar and
business-writing skills to become effective business communicators. Through a combination of
lectures, exercises, and independent learning, students practise writing, speaking, reading,
listening, locating and documenting information, and using technology to communicate
professionally. Students develop and strengthen communication skills that contribute to success in
both educational and workplace environments.
Prerequisite(s): none
Corerequisite(s):none
ENL1823B Communications II
Students continue to expand their practical writing and speaking skills for successful
communication in business. Using real-life scenarios and research skills, they produce informal
reports and proposals, deliver presentations to a live audience supported by technology, and
create a job-search package. Students create professional documents, such as information reports,
progress reports, justification/recommendation reports, summary reports, and minutes of meetings
to develop up-to-date writing skills. The job search package includes employment-readiness skills,
resumes, persuasive cover letters, and interview techniques. In all written work, students continue
to develop and enhance their grammar skills to meet professional, workplace standards.
Prerequisite(s): ENL1813B
Corerequisite(s):none
ENL1825 Communication Dynamics
Humans are dynamic, communicative, and socially interactive. Participants consider human
behaviour and its influence on interpersonal or impersonal connections with others by exploring
theories and ethical considerations of conformity, obedience and persuasion. Special attention is
paid to individual inner experiences, thoughts, feelings, emotions and introspections. Role play
learning and case studies allow participants to reflect and build upon their own observations and
experiences.
Prerequisite(s): none
Corerequisite(s):none
ENL1829 The Art of Oratory
From ghost stories around the campfire to political speeches by world leaders, ethical oratory
plays a significant role in human interaction. Students examine the social significance of public
speaking while developing a deeper understanding of the theory, science, elements of form, and
persuasive devices underlying this art. Building on their own stories, students prepare, deliver, and
critique group workshops, as well as design, deliver, and critique individual presentations.
Prerequisite(s): none
Corerequisite(s):none
FIN2230 Finance
Understanding the fundamentals of financial management is necessary for strong financial
decision-making. Students develop an understanding of the goals of financial management,
financial analysis and planning, financial forecasting, working capital management, capital
budgeting concepts including present value and cashflow analysis. Build on your knowledge of
basic accounting and economics concepts through a combination of in class lectures, practical
exercises and use of computer assisted tools.
Prerequisite(s): ACC2201 and BUS2301 or ACC2310 and BUS2301 or ACC1100 and ACC1211 and
BUS2301
Corerequisite(s):none
GED0214C General Education Elective
Students choose one course, from a group of general education electives, which meets one of the
following five theme requirements: Arts in Society, Civic Life, Social and Cultural Understanding,
Personal Understanding, and Science and Technology.
Prerequisite(s): none
Corerequisite(s):none
MGT2201 Business Fundamentals
Understanding the foundation of business is critical to anyone in the accounting field. This course
gives students a broad overview of critical elements of business including an introduction to
business structures, business processes, and various legal considerations. The use of practical
exercises provides students with a strong foundation of business knowledge for future work.
Prerequisite(s): none
Corerequisite(s):none
QUA2210 Basic Business Mathematics
The essentials of arithmetic and its applications, including fractions, decimals, percentages,
mathematics of buying and selling, payroll, taxes, depreciation and inventory are examined.
Students are introduced to the mathematics of finance, including simple and compound interest,
annuities, sinking funds, amortization and consumer credit.
Prerequisite(s): none
Corerequisite(s):none | Respond only with information present in the document. If the information is not present, respond with "This information is not available". When possible, use quotations and cite the document directly.
What do I need to know about my Financial Accounting program?
Area of Interest: Business
Business - Accounting
Ontario College Diploma
2 Years
Program Code: 0214C01FWO
Ottawa Campus
Our Program
Get the essential skills to start a career in Accounting.
The Business - Accounting Ontario College Diploma program balances accounting theory with
tools used in the industry. This two-year program equips you with the essential skills for various
entry-level accounting positions.
Learn how to complete accounting tasks, from conducting bookkeeping responsibilities to
preparing financial statements and personal income tax returns. Expand your knowledge of various
business concepts including economics and finance.
Explore accounting concepts while sharpening your communication, math and technological skills.
Courses incorporate accounting software to strengthen your computer literacy and provide you
with up-to-date technical skills, which are essential in this field.
In the program`s final semester, you have the opportunity to apply for a work placement to
practise your skills in a real work setting. See Additional Information for eligibility requirements.
Students considering a professional accounting designation or an accounting credential are
advised to make inquiries with the Chartered Professional Accountants of Ontario (CPA Ontario)
before deciding to complete this program. See Additional Information for further details.
This program prepares you for entry-level positions in:
- financial accounting
- managerial accounting
- payables and receivables
- taxation
Graduates typically find employment in roles such as:
- accounts payable clerk
- accounts receivable clerk
- bookkeeper
- payroll clerk
- junior staff accountant
SUCCESS FACTORS
This program is well-suited for students who:
- Enjoy problem solving and critical-thinking activities.
- Are inquisitive and have an analytical nature.
- Can work well independently and in a group.
- Are detail-oriented, organized and adaptable.
- Are comfortable using a variety of computer applications.
- Possess a high standard of ethics.
Employment
Graduates may pursue employment opportunities in various entry-level positions including
accounts receivables or payables, bookkeeping or payroll responsibilities. Roles include:
accounting assistant, accounts payable clerk, accounts receivable clerk, bookkeeper, and payroll
clerk.
Learning Outcomes
The graduate has reliably demonstrated the ability to:
- Record financial transactions in compliance with Canadian Generally Accepted Accounting
Principles for sole proprietorships, partnerships, private enterprises, publicly accountable
enterprises and non-profit organizations.
- Prepare and present financial statements, reports and other documents in compliance with
Canadian Generally Accepted Accounting Principles for sole proprietorships, partnerships and
private enterprises.
- Contribute to recurring decision-making by applying fundamental management accounting
concepts.
- Prepare individuals` income tax returns and basic tax planning in compliance with relevant
legislation and regulations.
- Analyze organizational structures, the interdependence of functional areas, and the impact
those relationships can have on financial performance.
- Analyze, within a Canadian context, the impact of economic variables, legislation, ethics,
technological advances and the environment on an organization`s operations.
- Outline the elements of an organization`s internal control system and risk management.
- Contribute to recurring decision-making by applying fundamental financial management
concepts.
- Identify and apply discipline-specific practices that contribute to the local and global
community through social responsibility, economic commitment and environmental
stewardship.
Program of Study
Level: 01 Courses Hours
ACC2201 Financial Accounting I 56.0
BUS2301 Business Computer Applications 42.0
ENL1813B Communications I 42.0
MGT2201 Business Fundamentals 42.0
QUA2210 Basic Business Mathematics 56.0
Level: 02 Courses Hours
ACC2202 Financial Accounting II 56.0
ACC2343 Spreadsheet Applications 56.0
ECO2200 Economic Issues 42.0
ENL1823B Communications II 42.0
FIN2230 Finance 42.0
Level: 03 Courses Hours
ACC2209 Financial Accounting III 70.0
ACC2233 Management Accounting I 56.0
ACC2262 Introduction to Personal Taxation 56.0
ACC2385 Accounting Software Applications 56.0
English General Education Elective: choose 1 Courses Hours
ENL1725 Canadian Identity 42.0
ENL1726 Symbols, Text and Meaning 42.0
ENL1798 Contemporary Canadian Issues 42.0
ENL1825 Communication Dynamics 42.0
ENL1829 The Art of Oratory 42.0
Level: 04 Courses Hours
ACC2211 Payroll and Compliance 56.0
ACC2234 Management Accounting II 56.0
ACC2265 Audit Principles and Business Issues 56.0
Elective: choose 1 Courses Hours
ACC0012 Integrated Accounting Practice 56.0
ACC0044 Work Experience 56.0
Choose one from equivalencies: Courses Hours
GED0214C General Education Elective 42.0
Fees for the 2023/2024 Academic Year
Tuition and related ancillary fees for this program can be viewed by using the Tuition and Fees
Estimator tool at https://www.algonquincollege.com/fee-estimator .
Further information on fees can be found by visiting the Registrar's Office website at
https://www.algonquincollege.com/ro .
Fees are subject to change.
Additional program related expenses include:
- Books and supplies cost approximately $600 to $800 per term. However in Levels 03 and 04
of the program, books may cost up to $1,000.
- Books and supplies can be purchased from the campus store. For more information visit
https://www.algonquincollege.com/coursematerials .
Admission Requirements for the 2024/2025 Academic Year
College Eligibility
- Ontario Secondary School Diploma (OSSD) or equivalent. Applicants with an OSSD showing
senior English and/or Mathematics courses at the Basic Level, or with Workplace or Open
courses, will be tested to determine their eligibility for admission; OR
- Academic and Career Entrance (ACE) certificate; OR
- General Educational Development (GED) certificate; OR
- Mature Student status (19 years of age or older and without a high school diploma at the
start of the program). Eligibility may be determined by academic achievement testing for which
a fee of $50 (subject to change) will be charged.
Program Eligibility
- English, Grade 12 (ENG4C or equivalent).
- Mathematics, Grade 12 (MAP4C or equivalent).
- Applicants with international transcripts must provide proof of the subject-specific
requirements noted above and may be required to provide proof of language proficiency.
Domestic applicants with international transcripts must be evaluated through the International
Credential Assessment Service of Canada (ICAS) or World Education Services (WES).
- IELTS-International English Language Testing Service (Academic) Overall band of 6.0 with a
minimum of 5.5 in each band; OR TOEFL-Internet-based (iBT) Overall 80, with a minimum of 20
in each component: Reading 20; Listening 20; Speaking 20; Writing 20; OR Duolingo English
Test (DET) Overall 110, minimum of 110 in Literacy and no score below 95.
- Not sure if you meet all of the requirements? Academic Upgrading may be able to help with
that: https://www.algonquincollege.com/access .
Should the number of qualified applicants exceed the number of available places, applicants will be
selected on the basis of their proficiency in English and mathematics.
Admission Requirements for 2023/2024 Academic Year
College Eligibility
- Ontario Secondary School Diploma (OSSD) or equivalent. Applicants with an OSSD showing
senior English and/or Mathematics courses at the Basic Level, or with Workplace or Open
courses, will be tested to determine their eligibility for admission; OR
- Academic and Career Entrance (ACE) certificate; OR
- General Educational Development (GED) certificate; OR
- Mature Student status (19 years of age or older and without a high school diploma at the
start of the program). Eligibility may be determined by academic achievement testing for which
a fee of $50 (subject to change) will be charged.
Program Eligibility
- English, Grade 12 (ENG4C or equivalent).
- Mathematics, Grade 12 (MAP4C or equivalent).
- Applicants with international transcripts must provide proof of the subject-specific
requirements noted above and may be required to provide proof of language proficiency.
Domestic applicants with international transcripts must be evaluated through the International
Credential Assessment Service of Canada (ICAS) or World Education Services (WES).
- IELTS-International English Language Testing Service (Academic) Overall band of 6.0 with a
minimum of 5.5 in each band; OR TOEFL-Internet-based (iBT) Overall 80, with a minimum of 20
in each component: Reading 20; Listening 20; Speaking 20; Writing 20.
Not sure if you meet all of the requirements? Academic Upgrading may be able to help with that:
https://www.algonquincollege.com/access/ .
Should the number of qualified applicants exceed the number of available places, applicants will be
selected on the basis of their proficiency in English and mathematics.
Application Information
BUSINESS - ACCOUNTING
Program Code 0214C01FWO
Applications to full-time day programs must be submitted with official transcripts showing
completion of the academic admission requirements through:
ontariocolleges.ca
60 Corporate Court
Guelph, Ontario N1G 5J3
1-888-892-2228
Students currently enrolled in an Ontario secondary school should notify their Guidance Office
prior to their online application at http://www.ontariocolleges.ca/ .
Applications for Fall Term and Winter Term admission received by February 1 will be given equal
consideration. Applications received after February 1 will be processed on a first-come, first-served
basis as long as places are available.
International applicants please visit this link for application process information:
https://algonquincollege.force.com/myACint/ .
For further information on the admissions process, contact:
Registrar's Office
Algonquin College
1385 Woodroffe Ave
Ottawa, ON K2G 1V8
Telephone: 613-727-0002
Toll-free: 1-800-565-4723
TTY: 613-727-7766
Fax: 613-727-7632
Contact: https://www.algonquincollege.com/ro
Additional Information
This program offers a September start or January start. Students who start in January must
complete their second level of the program in the Spring term and continue into the third level in
the Fall term.
Classes in this program may be scheduled between 8:00 AM and 10:00 PM, Monday through
Friday.
Work placement is an option available to students in the fourth level of this program. Work
placement is only available in the Winter term. Participants of the optional work placement will
receive a course credit for ACC0044 (Work Experience) in lieu of taking a fifth course on campus
during the fourth level of the program. Students must meet eligibility requirements in order to
participate in the work placement.
To be eligible to apply for work placement, students must be registered full-time with the regular
on-campus program, must have completed all level 1, 2 and 3 courses, must not have any academic
encumbrances and must meet certain academic standings. Due to the high demand for work
placements, some students may be required to secure their own placement subject to approval by
the program coordinator.
Students considering completing a degree after their diploma may be able to apply some courses
towards a degree through various university articulation agreements. For further information see
https://www.algonquincollege.com/degree-pathways/list/ .
Students considering pursuing a professional accounting designation are advised to make inquiries
with the Chartered Professional Accountants of Ontario (CPA Ontario). Please note that Algonquin
College courses are not directly transferrable to CPA unless they are transferred through a
recognized articulation agreement with a university. For further information see
http://www.cpaontario.ca/become-a-cpa/get-started .
Course Descriptions
ACC0012 Integrated Accounting Practice
Students draw upon knowledge learned throughout the program to participate in weekly duties
that simulate authentic business practices. Students integrate and apply their knowledge of
fundamental accounting and taxation to complete various tasks using professional business writing
skills and computer software.
Prerequisite(s): ACC2209 and ACC2233 and ACC2385 or ACC2341 and ACC2354 and ACC2385
Corerequisite(s):none
ACC0044 Work Experience
Accounting experience is advantageous when students search for work after graduation. Students
apply the skills and knowledge acquired to date in the program to a practical work environment.
Students report to a select employer and complete accounting-related tasks. Upon completion of
the work placement, employers and students rate the experience.
Prerequisite(s): ACC2209 and ACC2233 and ACC2262 and ACC2385 or ACC2341 and ACC2344 and
ACC2354 and ACC2385
Corerequisite(s):none
ACC2201 Financial Accounting I
This is the first course in a series of three financial accounting courses in this program. Students
learn to identify, measure, record and report financial transactions. Students learn the
fundamentals of the accounting cycle necessary to complete the financial statements and
accounting records of a business. Through a combination of in class lectures, practical exercises
and the use of computer assisted tools, students develop an understanding of essential accounting
concepts necessary for future studies.
Prerequisite(s): none
Corerequisite(s):none
ACC2202 Financial Accounting II
Building on previous studies in financial accounting, students expand their knowledge of
fundamental accounting concepts involved in measuring and recording financial transactions,
including analyzing these transactions and reporting them in the financial records of a business.
Students experience a combination of in class lectures, practical exercises and the use of
computerized tools to aid in the progress and understanding of vital accounting concepts.
Prerequisite(s): ACC2201 or ACC1100 and ACC1211 or ACC2310
Corerequisite(s):none
ACC2209 Financial Accounting III
This is the third and final financial accounting course in the program. Students examine the
transactions specific to corporations as well as more complex accounting topics. This course builds
on the material learned in the previous two financial accounting courses. Through a combination of
in class lectures, practical exercises and use of computer assisted tools, students develop an
understanding of essential accounting concepts necessary for the work place.
Prerequisite(s): ACC2202 or ACC2341
Corerequisite(s):none
ACC2211 Payroll and Compliance
Payroll and statutory compliance reporting is mandatory for every business. Students
learn how to apply payroll legislation to calculate deductions, net pay, and remittances, and
complete year-end payroll reporting. Students are introduced to the different types of
requirements with which businesses are expected to comply, including GST/HST, QST, EHT and
workers' compensation. Through a combination of theory and practical activities, students prepare
these submissions and calculations by reading through relevant legislation and completing
activities.
Prerequisite(s): ACC2202 or ACC2341
Corerequisite(s):none
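As a rough illustration of the payroll arithmetic described above — statutory deductions subtracted from gross pay to arrive at net pay — the short Python sketch below may help. It is not part of the course materials, and the CPP, EI and income tax rates in it are invented placeholders rather than actual CRA figures.

```python
# Illustrative net-pay calculation. All rates below are hypothetical
# placeholders, not actual CRA payroll rates.

def net_pay(gross: float, cpp_rate: float = 0.05, ei_rate: float = 0.016,
            tax_rate: float = 0.20) -> dict:
    """Return a simple breakdown of deductions and net pay for one pay period."""
    cpp = gross * cpp_rate          # pension contribution (placeholder rate)
    ei = gross * ei_rate            # employment insurance (placeholder rate)
    income_tax = gross * tax_rate   # flat tax used only for illustration
    deductions = cpp + ei + income_tax
    return {
        "gross": round(gross, 2),
        "cpp": round(cpp, 2),
        "ei": round(ei, 2),
        "income_tax": round(income_tax, 2),
        "net": round(gross - deductions, 2),
    }

if __name__ == "__main__":
    print(net_pay(2000.00))
```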
ACC2233 Management Accounting I
Managerial accounting aids organizations in making well-informed decisions, helping businesses to
succeed. Students are introduced to key workplace skills by exploring the goals, methods and
ethics of managerial accounting including product costing methods and the flow of costs for
manufacturing companies. Additionally, students will focus on decision-making tools, including
cost-volume-profit and contribution analysis. The curriculum is delivered in lecture, case study and
problem-solving format.
Prerequisite(s): ACC2202 or ACC2341
Corerequisite(s):none
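For readers unfamiliar with the cost-volume-profit and contribution analysis mentioned in the description, a minimal sketch follows. It is not course material; the price, variable cost and fixed cost figures are invented purely for illustration.

```python
# Cost-volume-profit (CVP) illustration with invented figures.

def contribution_margin_per_unit(price: float, variable_cost: float) -> float:
    return price - variable_cost

def break_even_units(fixed_costs: float, price: float, variable_cost: float) -> float:
    """Units needed so that total contribution margin covers fixed costs."""
    return fixed_costs / contribution_margin_per_unit(price, variable_cost)

if __name__ == "__main__":
    # Hypothetical product: sells for $50, has $30 of variable cost per unit,
    # with $40,000 of fixed costs per period.
    units = break_even_units(fixed_costs=40_000, price=50, variable_cost=30)
    print(f"Break-even volume: {units:.0f} units")  # 2000 units
```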
ACC2234 Management Accounting II
Students continue to enhance key workplace skills and business ethics by further exploring the
goals and methods of managerial accounting. Specific topic areas include merchandising
budgeting, performance evaluation in decentralized businesses and financial statement analysis.
Students will also focus on business decisions using managerial accounting tools such as relevant
costing. The curriculum is delivered in lecture, case and problem-solving format making extensive
use of Excel.
Prerequisite(s): ACC2233 and ACC2343 or ACC2343 and ACC2354
Corerequisite(s):none
ACC2262 Introduction to Personal Taxation
The ability to complete personal tax returns is an essential skill to have in the accounting field.
Using a variety of methods, including authentic tax cases, lectures and classroom discussions,
students use professional tax software to prepare personal Canadian tax returns for individuals
and unincorporated businesses. Basic principles of tax planning and tax legislation are covered. It is
strongly recommended that students use a Windows-based laptop (not a Mac).
Prerequisite(s): ACC2202 or ACC2313
Corerequisite(s):none
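To give a feel for the kind of calculation that sits behind a personal return, here is a small sketch of progressive bracket arithmetic. It is not drawn from the course or from any professional tax software; the brackets and rates are hypothetical placeholders, not the actual federal or Ontario schedules.

```python
# Progressive income tax illustration. The brackets and rates are invented
# placeholders, not actual federal or provincial schedules.

HYPOTHETICAL_BRACKETS = [
    (50_000, 0.15),        # first $50,000 taxed at 15%
    (50_000, 0.25),        # next $50,000 taxed at 25%
    (float("inf"), 0.33),  # remainder taxed at 33%
]

def tax_owing(taxable_income: float) -> float:
    remaining = taxable_income
    total = 0.0
    for width, rate in HYPOTHETICAL_BRACKETS:
        taxed_slice = min(remaining, width)
        total += taxed_slice * rate
        remaining -= taxed_slice
        if remaining <= 0:
            break
    return round(total, 2)

if __name__ == "__main__":
    print(tax_owing(80_000))  # 50,000 * 0.15 + 30,000 * 0.25 = 15,000.0
```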
ACC2265 Audit Principles and Business Issues
Students are introduced to current business issues relevant to Canadian organizations as well as
key auditing concepts that help to guide business ethics and decisions. Topics discussed include
the Canadian business environment and the current issues it faces, the need for greater ethical and
responsible behaviour in light of recent business scandals, fraud and the need for internal controls,
risk management and financial statement analysis. Various types of audit and techniques are
examined by students. Classroom lectures are enhanced by reading current material and
researching information using various tools.
Prerequisite(s): ACC2209 or ACC2341
Corerequisite(s):none
ACC2343 Spreadsheet Applications
Students enhance their knowledge of spreadsheets learned in BUS2301. Using Microsoft Excel,
students explore some of the more advanced Excel features, such as financial functions, charts,
logical functions, pivot tables, lists and look-up tables. These spreadsheet concepts are explored
through Excel-based lectures and hybrid activities including learning resource applications.
Prerequisite(s): ACC2201 and BUS2301 or ACC2313 and BUS2301 or ACC1100 and ACC1211 and
BUS2301
Corerequisite(s):none
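Because this course centres on Excel rather than programming, the following Python fragment is only an analogy, not course material: it mimics what a simple look-up table does, using an invented chart of accounts.

```python
# A look-up table in the spirit of Excel's VLOOKUP, expressed as a plain
# Python dictionary. The account codes and names are invented.

CHART_OF_ACCOUNTS = {
    "1000": "Cash",
    "1200": "Accounts Receivable",
    "2000": "Accounts Payable",
    "4000": "Sales Revenue",
}

def account_name(code: str) -> str:
    """Return the account name for a code, much like an exact-match VLOOKUP."""
    return CHART_OF_ACCOUNTS.get(code, "Unknown account")

if __name__ == "__main__":
    print(account_name("1200"))  # Accounts Receivable
```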
ACC2385 Accounting Software Applications
It is a workplace expectation that students are comfortable using accounting software on a day-to-day basis. Students gain practical experience using computerized accounting software to record
transactions and prepare financial statements. Curriculum is delivered in lecture format and by
hands-on completion of cases using accounting software. A Windows-based laptop (not a Mac) is
strongly recommended.
Prerequisite(s): ACC2341 or ACC2202
Corerequisite(s):none
BUS2301 Business Computer Applications
The knowledge gained in this course provides students with a solid foundation for future learning in
other studies, and in business. Students learn a variety of tasks in Windows file management.
Students also learn tasks and produce assignments in Microsoft Office for PC. Web-based software
running computer-based simulations and assessments are used to support and enrich the learning
experience. It is strongly recommended that students use a PC. Mac students may be required to
install and learn additional software to successfully complete the course.
Prerequisite(s): none
Corerequisite(s):none
ECO2200 Economic Issues
Knowledge of contemporary economic issues is essential to understanding the world in which we
live. Students examine the fundamental economic issues faced by modern economies with an
emphasis on the use of economic models to analyze economic developments accurately and
objectively. Key economic problems faced by society, and policy alternatives that governments
may use to deal with these problems are also investigated. The influence of economics on students' civic,
working and personal lives is explored through lectures, discussions, and monitoring of current
economic events.
Prerequisite(s): none
Corerequisite(s):none
ENL1725 Canadian Identity
Canadian identity is challenging to define, but depictions of our multicultural society are found and
explored in our writing. This course explores the importance of writers' perceptions of Canada,
how they promote their ideas through publishing, and how those published works have affected
Canadian society in recent history. Students are introduced to a wide range of writing with the aim
of exploring the theme of Canadian identity while enhancing students' awareness of the ethical
considerations necessary for a just society.
Prerequisite(s): none
Corerequisite(s):none
ENL1726 Symbols, Text and Meaning
Symbols and text are used to express, evoke, and manipulate an entire range of human emotions
and reactions. In this interactive, discussion-based course, students will explore historical and
contemporary approaches to using symbols, text, and language in conceptual and contemporary
art, graphic design and advertising, poetry and lyrics, and in online technology. Through discussion,
analysis, informal debate, and critical thinking, students will explore how symbols and text influence
individuals, society and culture.
Prerequisite(s): none
Corerequisite(s):none
ENL1798 Contemporary Canadian Issues
A critical understanding of contemporary Canadian issues is vital to being an active member in our
democratic society. Students explore a variety of topics and analyze their ethical implications and
relevance to Canadian life. Discussions, debates and other collaborative activities offer
opportunities to consider recent controversies from different perspectives, and use of a variety of
media (e.g. newspapers, articles, and other resources online) allows for in-depth reflection on the
history and current state of a range of social and political topics.
Prerequisite(s): none
Corerequisite(s):none
ENL1813B Communications I
Communication remains an essential skill sought by employers, regardless of discipline or field of
study. Using a practical, vocation-oriented approach, students develop stronger grammar and
business-writing skills to become effective business communicators. Through a combination of
lectures, exercises, and independent learning, students practise writing, speaking, reading,
listening, locating and documenting information, and using technology to communicate
professionally. Students develop and strengthen communication skills that contribute to success in
both educational and workplace environments.
Prerequisite(s): none
Corerequisite(s):none
ENL1823B Communications II
Students continue to expand their practical writing and speaking skills for successful
communication in business. Using real-life scenarios and research skills, they produce informal
reports and proposals, deliver presentations to a live audience supported by technology, and
create a job-search package. Students create professional documents, such as information reports,
progress reports, justification/recommendation reports, summary reports, and minutes of meetings
to develop up-to-date writing skills. The job search package includes employment-readiness skills,
resumes, persuasive cover letters, and interview techniques. In all written work, students continue
to develop and enhance their grammar skills to meet professional, workplace standards.
Prerequisite(s): ENL1813B
Corerequisite(s):none
ENL1825 Communication Dynamics
Humans are dynamic, communicative, and socially interactive. Participants consider human
behaviour and its influence on interpersonal or impersonal connections with others by exploring
theories and ethical considerations of conformity, obedience and persuasion. Special attention is
paid to individual inner experiences, thoughts, feelings, emotions and introspections. Role play
learning and case studies allow participants to reflect and build upon their own observations and
experiences.
Prerequisite(s): none
Corerequisite(s):none
ENL1829 The Art of Oratory
From ghost stories around the campfire to political speeches by world leaders, ethical oratory
plays a significant role in human interaction. Students examine the social significance of public
speaking while developing a deeper understanding of the theory, science, elements of form, and
persuasive devices underlying this art. Building on their own stories, students prepare, deliver, and
critique group workshops, as well as design, deliver, and critique individual presentations.
Prerequisite(s): none
Corerequisite(s):none
FIN2230 Finance
Understanding the fundamentals of financial management is necessary for strong financial
decision-making. Students develop an understanding of the goals of financial management,
financial analysis and planning, financial forecasting, working capital management, capital
budgeting concepts including present value and cashflow analysis. Students build on their knowledge
of basic accounting and economics concepts through a combination of in class lectures, practical
exercises and use of computer assisted tools.
Prerequisite(s): ACC2201 and BUS2301 or ACC2310 and BUS2301 or ACC1100 and ACC1211 and
BUS2301
Corerequisite(s):none
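The present value and cash flow analysis named in the description can be summarized in a few lines. The sketch below is illustrative only; the 8% discount rate and the cash flows are assumed values, not figures from the course.

```python
# Present value and net present value, the capital budgeting ideas named above.
# The cash flows and discount rate are invented for illustration.

def present_value(amount: float, rate: float, periods: int) -> float:
    """Discount a single future amount back to today."""
    return amount / (1 + rate) ** periods

def npv(rate: float, cash_flows: list[float]) -> float:
    """Cash flow at index 0 occurs today; later indices are one period apart."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

if __name__ == "__main__":
    # A hypothetical project: spend 10,000 today, then receive 4,000 per year
    # for three years.
    flows = [-10_000, 4_000, 4_000, 4_000]
    print(round(npv(0.08, flows), 2))  # positive NPV at an 8% discount rate
```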
GED0214C General Education Elective
Students choose one course, from a group of general education electives, which meets one of the
following five theme requirements: Arts in Society, Civic Life, Social and Cultural Understanding,
Personal Understanding, and Science and Technology.
Prerequisite(s): none
Corerequisite(s):none
MGT2201 Business Fundamentals
Understanding the foundation of business is critical to anyone in the accounting field. This course
gives students a broad overview of critical elements of business including an introduction to
business structures, business processes, and various legal considerations. The use of practical
exercises provides students with a strong foundation of business knowledge for future work.
Prerequisite(s): none
Corerequisite(s):none
QUA2210 Basic Business Mathematics
The essentials of arithmetic and its applications, including fractions, decimals, percentages,
mathematics of buying and selling, payroll, taxes, depreciation and inventory are examined.
Students are introduced to the mathematics of finance, including simple and compound interest,
annuities, sinking funds, amortization and consumer credit.
Prerequisite(s): none
Corerequisite(s):none |
Use only the provided information to generate responses, do not use any information not found within the question and context given. | What are two trends in Sudan's banking regulations that need to be encouraged in order to make the actual financial stability match the health indicated by the banking system's EM-Z score model results? | The analysis of the statistical results obtained from the univariate financial ratios model and Ahmed (2003) model indicate that the Sudanese banks are not financially sound. The liquidity ratios show that there has been deterioration in the liquidity position of the banking industry in Sudan. Since banks depend heavily on lending to generate revenues, the shortage in liquidity weakens their financing capability, which in turn negatively affects their earnings. Furthermore, the lack of liquidity may force banks either to sell assets or pay a premium on borrowed funds. The indebtedness measures reveal that banks are highly leveraged and thus are of high risk. This asserts that the banks will find it hard to get further financing from both national and international financial markets. This high credit risk also suggests that the bank is no longer attractive for the depositors. This is confirmed by the deposits structure of banks, which are mainly demand ones. This result is expected because the Marabaha margin, which indicates the return on investment deposits, almost remains fixed at 12% over the period examined and this percentage is far below inflation levels. This explains the shrinkage in investment deposits through time and signalizes the inability of
banks to earn satisfactory profits. Additionally, the profitability measures indicate that banks do generate sufficient profits from their operations. Due to the high level of inflation, the bank's managements find it difficult to pay dividends and also secure internal fund to sustain any growth strategy. The turnover financial metrics indicate that
the management of banks are inefficient in employing their working capital to generate revenues and are generally not optimizing the utilization of assets. This inefficient use of assets justifies the low level of profitability realized by those banks.
The results obtained from the analysis of the depositors’ confidence index indicate that the depositors slightly trust the banks operating in Sudan. This finding is highly expected as the previous studies provide evidence that factors such as slumping economy, turbulent political climate, high inflation, inconsistent policies and regulations, weak transparency, undercapitalization of banks, which are all prevailing in Sudan, negatively affect the confidence of depositors in their banks. This weak trust implies that the depositors are not sure that their banks can safely secure their deposits and thus are skeptical that their banks are able to pay them back their money. The low confidence in banks also indicates that the depositors are doubtful about the competency, integrity and transparency of their banks’ management. Further, the undercapitalization of banks triggers a fear of banks failure and thus loss of depositors’
money. Additionally, the inconsistent and ever-changing government policies, especially the monetary and credit ones, the weak legal and regulatory systems and laws, the deteriorating economic conditions of the country and the political instability and erratic country foreign relationships, signal that banks will suffer from financial difficulties in
the near future and initiate a strong tendency towards cash withdrawal from banks.
The analysis also shows that the privately-owned banks do not perform better than the government-owned ones. This result may be attributed to the fact that the government-owned banks are supported by the government. That is to say, the government usually injects funds in those banks that are in bad need for financing. The same logic applies for the better performance of the specialized banks as compared to the nonspecialized ones. The specialized banks are highly propped by the government. For instance, the Central bank has decreased the legal monetary reserve required
for the banks that provide finance to agricultural, industrial and mineral mining projects, in an attempt to boost exports. With regards to the comparison of the financial health of the foreign banks with that of the national banks, the analysis indicates that the financial health of both groups is similar, which led to the reasoning that the foreign banks have not benefited from their developed expertise, overseas existence and access the international financial market to strengthen their financial positions.
The contrary conclusion arrived at by the employment of EM Z-score model that banks operating in Sudan are generally healthy and financially viable may be in the context that, though the banking sector in Sudan is not financially sound, within the near future of two years most of the banks will not be bankrupt.
Several practical implications can be derived from the results of this study. To enhance the financial health of banks and boost the level of confidence in them a number of corrective actions need to be taken by banks management as well as regulatory bodies. Enhancing transparency through adopting enforceable comprehensive disclosure measures, imposing corporate governance, strengthening banks’ capitals and lowering operating costs are some suggested corrective actions. Regulators also need to set rules that protect depositors and safeguard their money. | Use only the provided information to generate responses, do not use any information not found within the question and context given.
What are two trends in Sudan's banking regulations that need to be encouraged in order to make the actual financial stability match the health indicated by the banking system's EM-Z score model results?
The analysis of the statistical results obtained from the univariate financial ratios model and Ahmed (2003) model indicate that the Sudanese banks are not financially sound. The liquidity ratios show that there has been deterioration in the liquidity position of the banking industry in Sudan. Since banks depend heavily on lending to generate revenues, the shortage in liquidity weakens their financing capability, which in turn negatively affects their earnings. Furthermore, the lack of liquidity may force banks either to sell assets or pay a premium on borrowed funds. The indebtedness measures reveal that banks are highly leveraged and thus are of high risk. This asserts that the banks will find it hard to get further financing from both national and international financial markets. This high credit risk also suggests that the bank is no longer attractive for the depositors. This is confirmed by the deposits structure of banks, which are mainly demand ones. This result is expected because the Marabaha margin, which indicates the return on investment deposits, almost remains fixed at 12% over the period examined and this percentage is far below inflation levels. This explains the shrinkage in investment deposits through time and signalizes the inability of
banks to earn satisfactory profits. Additionally, the profitability measures indicate that banks do generate sufficient profits from their operations. Due to the high level of inflation, the bank's managements find it difficult to pay dividends and also secure internal fund to sustain any growth strategy. The turnover financial metrics indicate that
the management of banks are inefficient in employing their working capital to generate revenues and are generally not optimizing the utilization of assets. This inefficient use of assets justifies the low level of profitability realized by those banks.
The results obtained from the analysis of the depositors’ confidence index indicate that the depositors slightly trust the banks operating in Sudan. This finding is highly expected as the previous studies provide evidence that factors such as slumping economy, turbulent political climate, high inflation, inconsistent policies and regulations, weak transparency, undercapitalization of banks, which are all prevailing in Sudan, negatively affect the confidence of depositors in their banks. This weak trust implies that the depositors are not sure that their banks can safely secure their deposits and thus are skeptical that their banks are able to pay them back their money. The low confidence in banks also indicates that the depositors are doubtful about the competency, integrity and transparency of their banks’ management. Further, the undercapitalization of banks triggers a fear of banks failure and thus loss of depositors’
money. Additionally, the inconsistent and ever-changing government policies, especially the monetary and credit ones, the weak legal and regulatory systems and laws, the deteriorating economic conditions of the country and the political instability and erratic country foreign relationships, signal that banks will suffer from financial difficulties in
the near future and initiate a strong tendency towards cash withdrawal from banks.
The analysis also shows that the privately-owned banks do not perform better than the government-owned ones. This result may be attributed to the fact that the government-owned banks are supported by the government. That is to say, the government usually injects funds in those banks that are in bad need for financing. The same logic applies for the better performance of the specialized banks as compared to the nonspecialized ones. The specialized banks are highly propped by the government. For instance, the Central bank has decreased the legal monetary reserve required
for the banks that provide finance to agricultural, industrial and mineral mining projects, in an attempt to boost exports. With regards to the comparison of the financial health of the foreign banks with that of the national banks, the analysis indicates that the financial health of both groups is similar, which led to the reasoning that the foreign banks have not benefited from their developed expertise, overseas existence and access the international financial market to strengthen their financial positions.
The contrary conclusion arrived at by the employment of EM Z-score model that banks operating in Sudan are generally healthy and financially viable may be in the context that, though the banking sector in Sudan is not financially sound, within the near future of two years most of the banks will not be bankrupt.
Several practical implications can be derived from the results of this study. To enhance the financial health of banks and boost the level of confidence in them a number of corrective actions need to be taken by banks management as well as regulatory bodies. Enhancing transparency through adopting enforceable comprehensive disclosure measures, imposing corporate governance, strengthening banks’ capitals and lowering operating costs are some suggested corrective actions. Regulators also need to set rules that protect depositors and safeguard their money. |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | In less than 250 words. Explain what I Bacillus cereus. Provide where the bacteria is found mostly. What temperature does it multiply quickly in and does it form a toxin? If so, what area of the body will contain the illness from the toxin? What are the symptoms of B cereus. | Bacillus cereus is a toxin-producing facultatively anaerobic gram-positive bacterium. The bacteria are commonly found in the environment and can contaminate food. It can quickly multiply at room temperature with an abundantly present preformed toxin. When ingested, this toxin can cause gastrointestinal illness, which is the commonly known manifestation of the disease. Gastrointestinal syndromes associated with B cereus include diarrheal illness without significant upper intestinal symptoms and a predominantly upper GI syndrome with nausea and vomiting without diarrhea. B cereus has also been implicated in infections of the eye, respiratory tract, and wounds. The pathogenicity of B cereus, whether intestinal or nonintestinal, is intimately associated with the production of tissue-destructive exoenzymes. Among these secreted toxins are hemolysins, phospholipases, and proteases.[1][2]
B cereus is a common bacterium, present ubiquitously in the environment. It can form spores which allows it to survive longer in extremes of temperature. Consequently, it is found as a contaminant of various foods, ie, beef, turkey, rice, beans, and vegetables. The diarrheal illness is often related to meats, milk, vegetables, and fish. The emetic illness is most often associated with rice products, but it has also been associated with other types of starchy products such as potatoes, pasta, and cheese. Some food mixtures (sauces, puddings, soups, casseroles, pastries, and salads, have been associated with food-borne illness in general.[3][4] Bacillus cereus is caused by the ingestion of food contaminated with enterotoxigenic B cereus or the emetic toxin. In non-gastrointestinal illness, reports of respiratory infections similar to respiratory anthrax have been attributed to B. cereus strains harboring B anthracis toxin genes.
The United States Centers for Disease Control and Prevention website states that there were 619 confirmed outbreaks of Bacillus-related poisoning from 1998 through 2015, involving 7385 illnesses. In this timeframe, there were 75 illnesses and three deaths due to confirmed Bacillus-related illnesses. The website states that there were 19,119 outbreaks overall and 373,531 illnesses. It refers to 14,681 hospitalizations and 337 deaths during this timeframe. These statistics refer to all Bacillus-related illnesses, and not just B cereus-related illnesses.[5][6]
The United States Food and Drug Administration's "Bad Bug Book" further breaks this down and states that there are an estimated 63,400 episodes of B cereus illness annually in the United States. From 2005 to 2007, there were 13 confirmed outbreaks and 37.6 suspected outbreaks involving over 1000 people. Everyone is susceptible to B. cereus infection; however, mortality related to this illness is rare. The emetic enterotoxin has been associated with a few cases of liver failure and death in otherwise healthy people. The infective dose or the number of organisms most commonly associated with human illness is 105 to 108 organisms/gram, but pathogenicity arises from the preformed toxin, not the bacteria themselves.
The pathogenicity of B cereus, whether inside or outside the gastrointestinal tract, is associated with exoenzyme production Among the secreted toxins are 4 hemolysins, 3 distinct phospholipases, and 3 pore-forming enterotoxins. The enterotoxins that activate the nod-like receptor protein-3 (NLRP3) are hemolysin BL, nonhemolytic enterotoxin (NHE), and cytotoxin K. In the small intestine, vegetative cells, ingested as viable cells or spores, produce and secrete a protein enterotoxin and induce diarrheal syndrome. Cereulide is a plasmid-encoded cyclic peptide, which is produced in food products and ingested as a formed toxin. In rabbit ligated ileal-loop assays, culture filtrates of enterotoxigenic strains induced fluid accumulation and hemolytic, cytotoxic, dermonecrosis, and increased vascular permeability in rabbit skin.[7] The enterotoxin is composed of a binding component (B) and 2 hemolytic components, designated HBL. In the diarrheal form of the disease, a nonhemolytic 3-component enterotoxin, designated NHE, has been identified. The NHE from Bacillus cereus activates the nod-like NLRP3 inflammasome and pyroptosis. This leads to programmed cell death initiated by the activation of inflammatory caspases of the infected tissue.[8] | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
In less than 250 words. Explain what I Bacillus cereus. Provide where the bacteria is found mostly. What temperature does it multiply quickly in and does it form a toxin? If so, what area of the body will contain the illness from the toxin? What are the symptoms of B cereus.
Bacillus cereus is a toxin-producing facultatively anaerobic gram-positive bacterium. The bacteria are commonly found in the environment and can contaminate food. It can quickly multiply at room temperature with an abundantly present preformed toxin. When ingested, this toxin can cause gastrointestinal illness, which is the commonly known manifestation of the disease. Gastrointestinal syndromes associated with B cereus include diarrheal illness without significant upper intestinal symptoms and a predominantly upper GI syndrome with nausea and vomiting without diarrhea. B cereus has also been implicated in infections of the eye, respiratory tract, and wounds. The pathogenicity of B cereus, whether intestinal or nonintestinal, is intimately associated with the production of tissue-destructive exoenzymes. Among these secreted toxins are hemolysins, phospholipases, and proteases.[1][2]
B cereus is a common bacterium, present ubiquitously in the environment. It can form spores which allows it to survive longer in extremes of temperature. Consequently, it is found as a contaminant of various foods, ie, beef, turkey, rice, beans, and vegetables. The diarrheal illness is often related to meats, milk, vegetables, and fish. The emetic illness is most often associated with rice products, but it has also been associated with other types of starchy products such as potatoes, pasta, and cheese. Some food mixtures (sauces, puddings, soups, casseroles, pastries, and salads, have been associated with food-borne illness in general.[3][4] Bacillus cereus is caused by the ingestion of food contaminated with enterotoxigenic B cereus or the emetic toxin. In non-gastrointestinal illness, reports of respiratory infections similar to respiratory anthrax have been attributed to B. cereus strains harboring B anthracis toxin genes.
The United States Centers for Disease Control and Prevention website states that there were 619 confirmed outbreaks of Bacillus-related poisoning from 1998 through 2015, involving 7385 illnesses. In this timeframe, there were 75 illnesses and three deaths due to confirmed Bacillus-related illnesses. The website states that there were 19,119 outbreaks overall and 373,531 illnesses. It refers to 14,681 hospitalizations and 337 deaths during this timeframe. These statistics refer to all Bacillus-related illnesses, and not just B cereus-related illnesses.[5][6]
The United States Food and Drug Administration's "Bad Bug Book" further breaks this down and states that there are an estimated 63,400 episodes of B cereus illness annually in the United States. From 2005 to 2007, there were 13 confirmed outbreaks and 37.6 suspected outbreaks involving over 1000 people. Everyone is susceptible to B. cereus infection; however, mortality related to this illness is rare. The emetic enterotoxin has been associated with a few cases of liver failure and death in otherwise healthy people. The infective dose or the number of organisms most commonly associated with human illness is 105 to 108 organisms/gram, but pathogenicity arises from the preformed toxin, not the bacteria themselves.
The pathogenicity of B cereus, whether inside or outside the gastrointestinal tract, is associated with exoenzyme production Among the secreted toxins are 4 hemolysins, 3 distinct phospholipases, and 3 pore-forming enterotoxins. The enterotoxins that activate the nod-like receptor protein-3 (NLRP3) are hemolysin BL, nonhemolytic enterotoxin (NHE), and cytotoxin K. In the small intestine, vegetative cells, ingested as viable cells or spores, produce and secrete a protein enterotoxin and induce diarrheal syndrome. Cereulide is a plasmid-encoded cyclic peptide, which is produced in food products and ingested as a formed toxin. In rabbit ligated ileal-loop assays, culture filtrates of enterotoxigenic strains induced fluid accumulation and hemolytic, cytotoxic, dermonecrosis, and increased vascular permeability in rabbit skin.[7] The enterotoxin is composed of a binding component (B) and 2 hemolytic components, designated HBL. In the diarrheal form of the disease, a nonhemolytic 3-component enterotoxin, designated NHE, has been identified. The NHE from Bacillus cereus activates the nod-like NLRP3 inflammasome and pyroptosis. This leads to programmed cell death initiated by the activation of inflammatory caspases of the infected tissue.[8]
https://www.ncbi.nlm.nih.gov/books/NBK459121/ |
System Instruction: [You must respond using a maximum of 5 sentences. You must only use information contained within the context block to formulate your response. If you cannot provide an answer using just the context block, you must use the phrase "I cannot provide an answer to your question."] | User Question: [According to the provided article, what method of temperature measurement is best for a 2-year-old child?] | Context Block: [Methods of Measurement: Methods of measuring a client’s body temperature vary based on developmental age, cognitive functioning, level of consciousness, state of health, safety, and agency/unit policy. The healthcare provider chooses the best method after considering client safety, accuracy, and least invasiveness, all contingent on the client’s health and illness state. The most accurate way to measure core body temperature is an invasive method through a pulmonary artery catheter. This is only performed in a critical care area when constant measurements are required along with other life-saving interventions. Methods of measurement include oral, axillary, tympanic, rectal, and dermal routes. Oral temperature can be taken with clients who can follow instructions, so this kind of measurement is common for clients over the age of four, or even younger children if they are cooperative. Another route other than oral (e.g., tympanic or axillary) is preferable when a client is on oxygen delivered via a face mask because this can alter the temperature. For children younger than four, axillary temperature is commonly measured unless a more accurate reading is required. Rectal temperature is an accurate way to measure body temperature (Mazerolle, Ganio, Casa, Vingren, & Klau, 2011). The rectal route is recommended by the Canadian Pediatric Society for children under two years of age (Leduc & Woods, 2017). However, this method is not used on infants younger than
thirty days or premature infants because of the risk of rectal tearing. If the rectal method is required, the procedure is generally only used by nurses and physicians. Dermal routes are alternative methods of measurement that may be used in some agencies and practice areas. This method can involve holding the device and sliding it over the skin of the forehead and then
down over the temporal artery in one motion. Dermal strips can also be placed on the forehead to measure skin temperature, but are not yet widely used, and the accuracy of this method has not yet been verified. More recently, there has been an increase in non-contact infrared thermometers particularly in the era of COVID-19 and other highly transmissible diseases. Depending on the type, these thermometers can be held at a short distance from the forehead or temporal area to measure temperature. Alternatively, some handheld thermal scanners that use an infrared camera can be held at a greater distance to screen large masses of people. Please refer to the manufacturer’s suggested
reference range for non-contact infrared thermometers and thermal scanners.] | System Instruction: [You must respond using a maximum of 5 sentences. You must only use information contained within the context block to formulate your response. If you cannot provide an answer using just the context block, you must use the phrase "I cannot provide an answer to your question."]
User Question: [According to the provided article, what method of temperature measurement is best for a 2-year-old child?]
Context Block: [Methods of Measurement: Methods of measuring a client’s body temperature vary based on developmental age, cognitive functioning, level of consciousness, state of health, safety, and agency/unit policy. The healthcare provider chooses the best method after considering client safety, accuracy, and least invasiveness, all contingent on the client’s health and illness state. The most accurate way to measure core body temperature is an invasive method through a pulmonary artery catheter. This is only performed in a critical care area when constant measurements are required along with other life-saving interventions. Methods of measurement include oral, axillary, tympanic, rectal, and dermal routes. Oral temperature can be taken with clients who can follow instructions, so this kind of measurement is common for clients over the age of four, or even younger children if they are cooperative. Another route other than oral (e.g., tympanic or axillary) is preferable when a client is on oxygen delivered via a face mask because this can alter the temperature. For children younger than four, axillary temperature is commonly measured unless a more accurate reading is required. Rectal temperature is an accurate way to measure body temperature (Mazerolle, Ganio, Casa, Vingren, & Klau, 2011). The rectal route is recommended by the Canadian Pediatric Society for children under two years of age (Leduc & Woods, 2017). However, this method is not used on infants younger than
thirty days or premature infants because of the risk of rectal tearing. If the rectal method is required, the procedure is generally only used by nurses and physicians. Dermal routes are alternative methods of measurement that may be used in some agencies and practice areas. This method can involve holding the device and sliding it over the skin of the forehead and then
down over the temporal artery in one motion. Dermal strips can also be placed on the forehead to measure skin temperature, but are not yet widely used, and the accuracy of this method has not yet been verified. More recently, there has been an increase in non-contact infrared thermometers particularly in the era of COVID-19 and other highly transmissible diseases. Depending on the type, these thermometers can be held at a short distance from the forehead or temporal area to measure temperature. Alternatively, some handheld thermal scanners that use an infrared camera can be held at a greater distance to screen large masses of people. Please refer to the manufacturer’s suggested
reference range for non-contact infrared thermometers and thermal scanners.] |
Only use the information provided in the document. | According to the article, how many new genital herpes infections are seen in the U.S in a single year? | **What is genital herpes?**
Genital herpes is a sexually transmitted disease (STD) caused by the herpes simplex virus type 1 (HSV-1) or type 2 (HSV-2).
How common is genital herpes?
Genital herpes infection is common in the United States. CDC estimated that there were 572,000 new genital herpes infections in the United States in a single year.1 Nationwide, 11.9 % of persons aged 14 to 49 years have HSV-2 infection (12.1% when adjusted for age).2 However, the prevalence of genital herpes infection is higher than that because an increasing number of genital herpes infections are caused by HSV-1. 3 Oral HSV-1 infection is typically acquired in childhood; because the prevalence of oral HSV-1 infection has declined in recent decades, people may have become more susceptible to contracting a genital herpes infection from HSV-1. 4
HSV-2 infection is more common among women than among men; the percentages of those infected during 2015-2016 were 15.9% versus 8.2% respectively, among 14 to 49 year olds. 2 This is possibly because genital infection is more easily transmitted from men to women than from women to men during penile-vaginal sex. 5 HSV-2 infection is more common among non-Hispanic blacks (34.6%) than among non-Hispanic whites (8.1%). 2 A previous analysis found that these disparities, exist even among persons with similar numbers of lifetime sexual partners. Most infected persons may be unaware of their infection; in the United States, an estimated 87.4% of 14 to 49 year olds infected with HSV-2 have never received a clinical diagnosis. 6
The age-adjusted percentage of persons in the United States infected with HSV-2 decreased from 18.0% in 1999–2000 to 12.1% in 2015-2016. 2
How do people get genital herpes?
Infections are transmitted through contact with HSV in herpes lesions, mucosal surfaces, genital secretions, or oral secretions. 5 HSV-1 and HSV-2 can be shed from normal-appearing oral or genital mucosa or skin. 7,8 Generally, a person can only get HSV-2 infection during genital contact with someone who has a genital HSV-2 infection. However, receiving oral sex from a person with an oral HSV-1 infection can result in getting a genital HSV-1 infection. 4 Transmission commonly occurs from contact with an infected partner who does not have visible lesions and who may not know that he or she is infected. 7 In persons with asymptomatic HSV-2 infections, genital HSV shedding occurs on 10.2% of days, compared to 20.1% of days among those with symptomatic infections. 8
What are the symptoms of genital herpes?
Most individuals infected with HSV are asymptomatic or have very mild symptoms that go unnoticed or are mistaken for another skin condition. 9 When symptoms do occur, herpes lesions typically appear as one or more vesicles, or small blisters, on or around the genitals, rectum or mouth. The average incubation period for an initial herpes infection is 4 days (range, 2 to 12) after exposure. 10 The vesicles break and leave painful ulcers that may take two to four weeks to heal after the initial herpes infection. 5,10 Experiencing these symptoms is referred to as having a first herpes “outbreak” or episode.
Clinical manifestations of genital herpes differ between the first and recurrent (i.e., subsequent) outbreaks. The first outbreak of herpes is often associated with a longer duration of herpetic lesions, increased viral shedding (making HSV transmission more likely) and systemic symptoms including fever, body aches, swollen lymph nodes, or headache. 5,10 Recurrent outbreaks of genital herpes are common, and many patients who recognize recurrences have prodromal symptoms, either localized genital pain, or tingling or shooting pains in the legs, hips or buttocks, which occur hours to days before the eruption of herpetic lesions. 5 Symptoms of recurrent outbreaks are typically shorter in duration and less severe than the first outbreak of genital herpes. 5 Long-term studies have indicated that the number of symptomatic recurrent outbreaks may decrease over time. 5 Recurrences and subclinical shedding are much less frequent for genital HSV-1 infection than for genital HSV-2 infection.5
What are the complications of genital herpes?
Genital herpes may cause painful genital ulcers that can be severe and persistent in persons with suppressed immune systems, such as HIV-infected persons. 5 Both HSV-1 and HSV-2 can also cause rare but serious complications such as aseptic meningitis (inflammation of the linings of the brain). 5 Development of extragenital lesions (e.g. buttocks, groin, thigh, finger, or eye) may occur during the course of infection. 5
Some persons who contract genital herpes have concerns about how it will impact their overall health, sex life, and relationships. 5,11 There can also be considerable embarrassment, shame, and stigma associated with a herpes diagnosis that can substantially interfere with a patient’s relationships. 10 Clinicians can address these concerns by encouraging patients to recognize that while herpes is not curable, it is a manageable condition. 5 Three important steps that providers can take for their newly-diagnosed patients are: giving information, providing support resources, and helping define treatment and prevention options. 12 Patients can be counseled that risk of genital herpes transmission can be reduced, but not eliminated, by disclosure of infection to sexual partners, 5 avoiding sex during a recurrent outbreak, 5 use of suppressive antiviral therapy, 5,10 and consistent condom use. 7 Since a diagnosis of genital herpes may affect perceptions about existing or future sexual relationships, it is important for patients to understand how to talk to sexual partners about STDs. One resource can be found here: www.gytnow.org/talking-to-your-partner
There are also potential complications for a pregnant woman and her newborn child. See “How does herpes infection affect a pregnant woman and her baby?” below for information about this.
What is the link between genital herpes and HIV?
Genital ulcerative disease caused by herpes makes it easier to transmit and acquire HIV infection sexually. There is an estimated 2- to 4-fold increased risk of acquiring HIV, if individuals with genital herpes infection are genitally exposed to HIV. 13-15 Ulcers or breaks in the skin or mucous membranes (lining of the mouth, vagina, and rectum) from a herpes infection may compromise the protection normally provided by the skin and mucous membranes against infections, including HIV. 14 In addition, having genital herpes increases the number of CD4 cells (the target cell for HIV entry) in the genital mucosa. In persons with both HIV and genital herpes, local activation of HIV replication at the site of genital herpes infection can increase the risk that HIV will be transmitted during contact with the mouth, vagina, or rectum of an HIV-uninfected sex partner. 14
How does genital herpes affect a pregnant woman and her baby?
Neonatal herpes is one of the most serious complications of genital herpes.5,16 Healthcare providers should ask all pregnant women if they have a history of genital herpes.11 Herpes infection can be passed from mother to child during pregnancy or childbirth, or babies may be infected shortly after birth, resulting in a potentially fatal neonatal herpes infection. 17 Infants born to women who acquire genital herpes close to the time of delivery and are shedding virus at delivery are at a much higher risk for developing neonatal herpes, compared with women who have recurrent genital herpes . 16,18-20 Thus, it is important that women avoid contracting herpes during pregnancy. Women should be counseled to abstain from intercourse during the third trimester with partners known to have or suspected of having genital herpes. 5,11
While women with genital herpes may be offered antiviral medication late in pregnancy through delivery to reduce the risk of a recurrent herpes outbreak, third trimester antiviral prophylaxis has not been shown to decrease the risk of herpes transmission to the neonate.11,21,22 Routine serologic HSV screening of pregnant women is not recommended. 11 However, at onset of labor, all women should undergo careful examination and questioning to evaluate for presence of prodromal symptoms or herpetic lesions. 11 If herpes symptoms are present a cesarean delivery is recommended to prevent HSV transmission to the infant.5,11,23 There are detailed guidelines for how to manage asymptomatic infants born to women with active genital herpes lesions.24
How is genital herpes diagnosed?
HSV nucleic acid amplification tests (NAAT) are the most sensitive and highly specific tests available for diagnosing herpes. However, in some settings viral culture is the only test available. The sensitivity of viral culture can be low, especially among people who have recurrent or healing lesions. Because viral shedding is intermittent, it is possible for someone to have a genital herpes infection even though it was not detected by NAAT or culture. 11
Type-specific virologic tests can be used for diagnosing genital herpes when a person has recurrent symptoms or lesion without a confirmatory NAAT, culture result, or has a partner with genital herpes. Both virologic tests and type-specific serologic tests should be available in clinical settings serving patients with, or at risk for, sexually transmitted infections. 11
Given performance limitations with commercially available type-specific serologic tests (especially with low index value results [<3]), a confirmatory test (Biokit or Western Blot) with a second method should be performed before test interpretation. If confirmatory tests are unavailable, patients should be counseled about the limitations of available testing before serologic testing. Healthcare providers should also be aware that false-positive results occur. In instances of suspected recent acquisition, serologic testing within 12 weeks after acquisition may be associated with false negative test results. 11
HSV-1 serologic testing does not distinguish between oral and genital infection, and typically should not be performed for diagnosing genital HSV-1 infection. Diagnosis of genital HSV-1 infection is confirmed by virologic tests from lesions. 11
CDC does not recommend screening for HSV-1 or HSV-2 in the general population due to limitations of the type specific serologic testing. 11 Several scenarios where type-specific serologic HSV tests may be useful include:
Patients with recurrent genital symptoms or atypical symptoms and negative HSV NAAT or culture;
Patients with a clinical diagnosis of genital herpes but no laboratory confirmation; and
Patients who report having a partner with genital herpes. 11
Patients who are at higher risk of infection (e.g., presenting for an STI evaluation, especially those with multiple sex partners), and people with HIV might need to be assessed for a history of genital herpes symptoms, followed by serology testing in those with genital symptoms. 11
[Question]
According to the article, how many new genital herpes infections are seen in the U.S in a single year?
----------
[Task Instruction]
Only use the information provided in the document.
----------
Only refer to the document to answer the question. Only answer the question, do not add extra chatter or descriptions. Your answer should not be in bullet point format.
Explain what effect frequent trading of ETF Shares has on shareholders.
**Risks of Exchange-Traded Shares**
ETF Shares are not individually redeemable. They can be redeemed with
the issuing Fund at NAV only by certain authorized broker-dealers and only in
large blocks known as Creation Units. Consequently, if you want to liquidate
some or all of your ETF Shares, you must sell them on the secondary market
at prevailing market prices.
The market price of ETF Shares may differ from NAV. Although it is
expected that the market price of an ETF Share typically will approximate its
NAV, there may be times when the market price and the NAV differ
significantly. Thus, you may pay more (premium) or less (discount) than NAV
when you buy ETF Shares on the secondary market, and you may receive
more or less than NAV when you sell those shares. These discounts and
premiums are likely to be greatest during times of market disruption or
extreme market volatility.
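The premium or discount referred to here is simply the gap between the exchange price and NAV, expressed as a percentage of NAV. A minimal sketch of that calculation follows; the prices used are illustrative values, not figures from this prospectus.

```python
def premium_discount_pct(market_price: float, nav: float) -> float:
    """Return the premium (+) or discount (-) to NAV as a percentage of NAV."""
    return (market_price - nav) / nav * 100.0

# Illustrative values only: a share trading at 100.40 against a 100.00 NAV
print(premium_discount_pct(100.40, 100.00))  # 0.40% premium
print(premium_discount_pct(99.75, 100.00))   # -0.25% discount
```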
Vanguard’s website at vanguard.com shows the previous day’s closing NAV and
closing market price for the Fund’s ETF Shares. The website also discloses, in
the Premium/Discount Analysis section of the ETF Shares’ Price & Performance
page, how frequently the Fund’s ETF Shares traded at a premium or discount to
NAV (based on closing NAVs and market prices) and the magnitudes of such
premiums and discounts.
An active trading market may not exist. Although Vanguard ETF Shares are
listed on a national securities exchange, it is possible that an active trading
market may not be maintained. Although this could happen at any time, it is
more likely to occur during times of severe market disruption. If you attempt
to sell your ETF Shares when an active trading market is not functioning, you
may have to sell at a significant discount to NAV. In extreme cases, you may
not be able to sell your shares at all.
Trading may be halted. Trading of Vanguard ETF Shares on an exchange may
be halted by the activation of individual or marketwide trading halts (which halt
trading for a specific period of time when the price of a particular security or
overall market prices decline by a specified percentage). Trading of ETF Shares
may also be halted if (1) the shares are delisted from the listing exchange
without first being listed on another exchange or (2) exchange officials
determine that such action is appropriate in the interest of a fair and orderly
market or for the protection of investors.
Conversion Privilege
Owners of conventional shares issued by the Fund may convert those shares to
ETF Shares of equivalent value of the same fund. Please note that investors who
own conventional shares through a 401(k) plan or other employer-sponsored
retirement or benefit plan generally may not convert those shares to ETF Shares
and should check with their plan sponsor or recordkeeper. ETF Shares, whether
acquired through a conversion or purchased on the secondary market, cannot be
converted to conventional shares by a shareholder. Also, ETF Shares of one fund
cannot be exchanged for ETF Shares of another fund.
You must hold ETF Shares in a brokerage account. Thus, before converting
conventional shares to ETF Shares, you must have an existing, or open a new,
brokerage account. This account may be with Vanguard Brokerage Services® or
with any other brokerage firm. To initiate a conversion of conventional shares to
ETF Shares, please contact your broker.
Vanguard Brokerage Services does not impose a fee on conversions from
Vanguard conventional shares to Vanguard ETF Shares. However, other
brokerage firms may charge a fee to process a conversion. Vanguard reserves
the right, in the future, to impose a transaction fee on conversions or to limit,
temporarily suspend, or terminate the conversion privilege.
Converting conventional shares to ETF Shares is generally accomplished as
follows. First, after your broker notifies Vanguard of your request to convert,
Vanguard will transfer your conventional shares from your account to the
broker’s omnibus account with Vanguard (an account maintained by the broker
on behalf of all its customers who hold conventional Vanguard fund shares
through the broker). After the transfer, Vanguard’s records will reflect your broker,
not you, as the owner of the shares. Next, your broker will instruct Vanguard to
convert the appropriate number or dollar amount of conventional shares in its
omnibus account to ETF Shares of equivalent value, based on the respective
NAVs of the two share classes.
Your Fund’s transfer agent will reflect ownership of all ETF Shares in the name of
the Depository Trust Company (DTC). The DTC will keep track of which ETF
Shares belong to your broker, and your broker, in turn, will keep track of which
ETF Shares belong to you.
Because the DTC is unable to handle fractional shares, only whole shares can be
converted. For example, if you owned 300.25 conventional shares, and this was
equivalent in value to 90.75 ETF Shares, the DTC account would receive 90 ETF
Shares. Conventional shares with a value equal to 0.75 ETF Shares (in this
example, that would be 2.481 conventional shares) would remain in the broker’s
omnibus account with Vanguard. Your broker then could either (1) credit your
account with 0.75 ETF Shares or (2) redeem the 2.481 conventional shares for
cash at NAV and deliver that cash to your account. If your broker chose to
redeem your conventional shares, you would realize a gain or loss on the
redemption that must be reported on your tax return (unless you hold the shares
in an IRA or other tax-deferred account). Please consult your broker for
information on how it will handle the conversion process, including whether it
will impose a fee to process a conversion.
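The arithmetic behind the 300.25-share example above can be sketched as follows. The only inputs are the figures given in the text (300.25 conventional shares equivalent in value to 90.75 ETF Shares); the per-share NAVs themselves are not needed, since only their ratio matters.

```python
import math

conventional_held = 300.25        # conventional shares requested for conversion
etf_equivalent = 90.75            # ETF Shares of equivalent value (from the example)

ratio = conventional_held / etf_equivalent   # conventional shares per ETF Share

whole_etf = math.floor(etf_equivalent)       # 90 ETF Shares delivered to the DTC account
fractional_etf = etf_equivalent - whole_etf  # 0.75 ETF Shares that cannot be delivered

# Conventional shares left in the broker's omnibus account (about 2.481 in the example)
leftover_conventional = fractional_etf * ratio

print(whole_etf, round(fractional_etf, 2), round(leftover_conventional, 3))  # 90 0.75 2.481
```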
If you convert your conventional shares to ETF Shares through Vanguard
Brokerage Services, all conventional shares for which you request conversion
will be converted to ETF Shares of equivalent value. Because no fractional
shares will have to be sold, the transaction will not be taxable.
Here are some important points to keep in mind when converting conventional
shares of a Vanguard fund to ETF Shares:
• The conversion process can take anywhere from several days to several
weeks, depending on your broker. Vanguard generally will process conversion
requests either on the day they are received or on the next business day.
Vanguard imposes conversion blackout windows around the dates when a fund
with ETF Shares declares dividends. This is necessary to prevent a shareholder
from collecting a dividend from both the conventional share class currently held
and also from the ETF share class to which the shares will be converted.
• Until the conversion process is complete, you will remain fully invested in a
fund’s conventional shares, and your investment will increase or decrease in
value in tandem with the NAV of those shares.
• The conversion transaction is nontaxable except, if applicable, to the very
limited extent previously described.
Shareholder Rights
The Fund’s Agreement and Declaration of Trust, as amended, requires a
shareholder bringing a derivative action on behalf of Vanguard Index Funds (the
Trust) that is subject to a pre-suit demand to collectively hold at least 10% of the
outstanding shares of the Trust or at least 10% of the outstanding shares of the
series or class to which the demand relates and to undertake to reimburse the
Trust for the expense of any counsel or advisors used when considering the
merits of the demand in the event that the board of trustees determines not to
bring such action. In each case, these requirements do not apply to claims
arising under the federal securities laws to the extent that any such federal
securities laws, rules, or regulations do not permit such application.
A precautionary note to investment companies: The Fund’s ETF Shares are
issued by a registered investment company, and therefore the acquisition of
such shares by other investment companies and private funds is subject to the
restrictions of Section 12(d)(1) of the Investment Company Act of 1940 (the 1940
Act). SEC Rule 12d1-4 under the 1940 Act permits registered investment
companies to invest in other registered investment companies beyond the limits
in Section 12(d)(1), subject to certain conditions, including that funds with
different investment advisors must enter into a fund of funds
investment agreement.
Frequent Trading and Market-Timing
Unlike frequent trading of a Vanguard fund’s conventional (i.e., not
exchange-traded) classes of shares, frequent trading of ETF Shares does not
disrupt portfolio management or otherwise harm fund shareholders. The vast
majority of trading in ETF Shares occurs on the secondary market. Because
these trades do not involve the issuing fund, they do not harm the fund or its
shareholders. Certain broker-dealers are authorized to purchase and redeem ETF
Shares directly with the issuing fund. Because these trades typically are effected
in kind (i.e., for securities and not for cash), or are assessed a transaction fee
when effected in cash, they do not cause any of the harmful effects to the
issuing fund (as previously noted) that may result from frequent trading. For
these reasons, the board of trustees of each fund that issues ETF Shares has
determined that it is not necessary to adopt policies and procedures to detect
and deter frequent trading and market-timing of ETF Shares.
Portfolio Holdings
Please consult the Fund’s Statement of Additional Information or our website for
a description of the policies and procedures that govern disclosure of the Fund’s
portfolio holdings.
Turnover Rate
Although the Fund generally seeks to invest for the long term, it may sell
securities regardless of how long they have been held. Generally, an index fund
sells securities in response to redemption requests from shareholders of
conventional (i.e., not exchange-traded) shares or to changes in the composition
of its target index. Turnover rates for large-cap stock index funds tend to be low
because large-cap indexes—such as the S&P 500 Index—typically do not change
significantly from year to year. The Financial Highlights section of this
prospectus shows historical turnover rates for the Fund. A turnover rate of
100%, for example, would mean that the Fund had sold and replaced securities
valued at 100% of its net assets within a one-year period. In general, the greater
the turnover rate, the greater the impact transaction costs will have on a fund’s
return. Also, funds with high turnover rates may be more likely to generate
capital gains, including short-term capital gains, that must be distributed to
shareholders and will be taxable to shareholders investing through a
taxable account. | [article]
==========
**Risks of Exchange-Traded Shares**
ETF Shares are not individually redeemable. They can be redeemed with
the issuing Fund at NAV only by certain authorized broker-dealers and only in
large blocks known as Creation Units. Consequently, if you want to liquidate
some or all of your ETF Shares, you must sell them on the secondary market
at prevailing market prices.
The market price of ETF Shares may differ from NAV. Although it is
expected that the market price of an ETF Share typically will approximate its
NAV, there may be times when the market price and the NAV differ
significantly. Thus, you may pay more (premium) or less (discount) than NAV
when you buy ETF Shares on the secondary market, and you may receive
more or less than NAV when you sell those shares. These discounts and
premiums are likely to be greatest during times of market disruption or
extreme market volatility.
Vanguard’s website at vanguard.com shows the previous day’s closing NAV and
closing market price for the Fund’s ETF Shares. The website also discloses, in
the Premium/Discount Analysis section of the ETF Shares’ Price & Performance
page, how frequently the Fund’s ETF Shares traded at a premium or discount to
NAV (based on closing NAVs and market prices) and the magnitudes of such
premiums and discounts.
An active trading market may not exist. Although Vanguard ETF Shares are
listed on a national securities exchange, it is possible that an active trading
market may not be maintained. Although this could happen at any time, it is
more likely to occur during times of severe market disruption. If you attempt
to sell your ETF Shares when an active trading market is not functioning, you
may have to sell at a significant discount to NAV. In extreme cases, you may
not be able to sell your shares at all.
Trading may be halted. Trading of Vanguard ETF Shares on an exchange may
be halted by the activation of individual or marketwide trading halts (which halt
trading for a specific period of time when the price of a particular security or
overall market prices decline by a specified percentage). Trading of ETF Shares
may also be halted if (1) the shares are delisted from the listing exchange
without first being listed on another exchange or (2) exchange officials
determine that such action is appropriate in the interest of a fair and orderly
market or for the protection of investors.
Conversion Privilege
Owners of conventional shares issued by the Fund may convert those shares to
ETF Shares of equivalent value of the same fund. Please note that investors who
own conventional shares through a 401(k) plan or other employer-sponsored
15
retirement or benefit plan generally may not convert those shares to ETF Shares
and should check with their plan sponsor or recordkeeper. ETF Shares, whether
acquired through a conversion or purchased on the secondary market, cannot be
converted to conventional shares by a shareholder. Also, ETF Shares of one fund
cannot be exchanged for ETF Shares of another fund.
You must hold ETF Shares in a brokerage account. Thus, before converting
conventional shares to ETF Shares, you must have an existing, or open a new,
brokerage account. This account may be with Vanguard Brokerage Services®
or
with any other brokerage firm. To initiate a conversion of conventional shares to
ETF Shares, please contact your broker.
Vanguard Brokerage Services does not impose a fee on conversions from
Vanguard conventional shares to Vanguard ETF Shares. However, other
brokerage firms may charge a fee to process a conversion. Vanguard reserves
the right, in the future, to impose a transaction fee on conversions or to limit,
temporarily suspend, or terminate the conversion privilege.
Converting conventional shares to ETF Shares is generally accomplished as
follows. First, after your broker notifies Vanguard of your request to convert,
Vanguard will transfer your conventional shares from your account to the
broker’s omnibus account with Vanguard (an account maintained by the broker
on behalf of all its customers who hold conventional Vanguard fund shares
through the broker). After the transfer, Vanguard’s records will reflect your broker,
not you, as the owner of the shares. Next, your broker will instruct Vanguard to
convert the appropriate number or dollar amount of conventional shares in its
omnibus account to ETF Shares of equivalent value, based on the respective
NAVs of the two share classes.
Your Fund’s transfer agent will reflect ownership of all ETF Shares in the name of
the Depository Trust Company (DTC). The DTC will keep track of which ETF
Shares belong to your broker, and your broker, in turn, will keep track of which
ETF Shares belong to you.
Because the DTC is unable to handle fractional shares, only whole shares can be
converted. For example, if you owned 300.25 conventional shares, and this was
equivalent in value to 90.75 ETF Shares, the DTC account would receive 90 ETF
Shares. Conventional shares with a value equal to 0.75 ETF Shares (in this
example, that would be 2.481 conventional shares) would remain in the broker’s
omnibus account with Vanguard. Your broker then could either (1) credit your
account with 0.75 ETF Shares or (2) redeem the 2.481 conventional shares for
cash at NAV and deliver that cash to your account. If your broker chose to
redeem your conventional shares, you would realize a gain or loss on the
redemption that must be reported on your tax return (unless you hold the shares
16
in an IRA or other tax-deferred account). Please consult your broker for
information on how it will handle the conversion process, including whether it
will impose a fee to process a conversion.
If you convert your conventional shares to ETF Shares through Vanguard
Brokerage Services, all conventional shares for which you request conversion
will be converted to ETF Shares of equivalent value. Because no fractional
shares will have to be sold, the transaction will not be taxable.
Here are some important points to keep in mind when converting conventional
shares of a Vanguard fund to ETF Shares:
• The conversion process can take anywhere from several days to several
weeks, depending on your broker. Vanguard generally will process conversion
requests either on the day they are received or on the next business day.
Vanguard imposes conversion blackout windows around the dates when a fund
with ETF Shares declares dividends. This is necessary to prevent a shareholder
from collecting a dividend from both the conventional share class currently held
and also from the ETF share class to which the shares will be converted.
• Until the conversion process is complete, you will remain fully invested in a
fund’s conventional shares, and your investment will increase or decrease in
value in tandem with the NAV of those shares.
• The conversion transaction is nontaxable except, if applicable, to the very
limited extent previously described.
Shareholder Rights
The Fund’s Agreement and Declaration of Trust, as amended, requires a
shareholder bringing a derivative action on behalf of Vanguard Index Funds (the
Trust) that is subject to a pre-suit demand to collectively hold at least 10% of the
outstanding shares of the Trust or at least 10% of the outstanding shares of the
series or class to which the demand relates and to undertake to reimburse the
Trust for the expense of any counsel or advisors used when considering the
merits of the demand in the event that the board of trustees determines not to
bring such action. In each case, these requirements do not apply to claims
arising under the federal securities laws to the extent that any such federal
securities laws, rules, or regulations do not permit such application.
A precautionary note to investment companies: The Fund’s ETF Shares are
issued by a registered investment company, and therefore the acquisition of
such shares by other investment companies and private funds is subject to the
restrictions of Section 12(d)(1) of the Investment Company Act of 1940 (the 1940
Act). SEC Rule 12d1-4 under the 1940 Act permits registered investment
companies to invest in other registered investment companies beyond the limits
17
in Section 12(d)(1), subject to certain conditions, including that funds with
different investment advisors must enter into a fund of funds
investment agreement.
Frequent Trading and Market-Timing
Unlike frequent trading of a Vanguard fund’s conventional (i.e., not
exchange-traded) classes of shares, frequent trading of ETF Shares does not
disrupt portfolio management or otherwise harm fund shareholders. The vast
majority of trading in ETF Shares occurs on the secondary market. Because
these trades do not involve the issuing fund, they do not harm the fund or its
shareholders. Certain broker-dealers are authorized to purchase and redeem ETF
Shares directly with the issuing fund. Because these trades typically are effected
in kind (i.e., for securities and not for cash), or are assessed a transaction fee
when effected in cash, they do not cause any of the harmful effects to the
issuing fund (as previously noted) that may result from frequent trading. For
these reasons, the board of trustees of each fund that issues ETF Shares has
determined that it is not necessary to adopt policies and procedures to detect
and deter frequent trading and market-timing of ETF Shares.
Portfolio Holdings
Please consult the Fund’s Statement of Additional Information or our website for
a description of the policies and procedures that govern disclosure of the Fund’s
portfolio holdings.
Turnover Rate
Although the Fund generally seeks to invest for the long term, it may sell
securities regardless of how long they have been held. Generally, an index fund
sells securities in response to redemption requests from shareholders of
conventional (i.e., not exchange-traded) shares or to changes in the composition
of its target index. Turnover rates for large-cap stock index funds tend to be low
because large-cap indexes—such as the S&P 500 Index—typically do not change
significantly from year to year. The Financial Highlights section of this
prospectus shows historical turnover rates for the Fund. A turnover rate of
100%, for example, would mean that the Fund had sold and replaced securities
valued at 100% of its net assets within a one-year period. In general, the greater
the turnover rate, the greater the impact transaction costs will have on a fund’s
return. Also, funds with high turnover rates may be more likely to generate
capital gains, including short-term capital gains, that must be distributed to
shareholders and will be taxable to shareholders investing through a
taxable account.
----------------
[query]
==========
Explain what effect frequent trading of ETF Shares has on shareholders.
----------------
[task]
==========
Only refer to the document to answer the question. Only answer the question, do not add extra chatter or descriptions. Your answer should not be in bullet point format. |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | I am an adult currently on ADHD medication. I am thinking about getting pregnant but am not sure if my ADHD medication would need any adjustments. If I want to continue taking Adderall, what are some potential benefits or detriments that I should be aware of? Use under 400 words. | This sheet is about exposure to dextroamphetamine-amphetamine in pregnancy and while breastfeeding. This information is based on available published literature. It should not take the place of medical care and advice from your healthcare provider.
What is dextroamphetamine-amphetamine?
Dextroamphetamine-amphetamine (Adderall®) is a combination prescription medication that has been used to treat attention deficit hyperactive disorder (ADHD) and narcolepsy (a condition that affects the brain's ability to control sleeping and waking up).
Sometimes when people find out they are pregnant, they think about changing how they take their medication, or stopping their medication altogether. However, it is important to talk with your healthcare providers before making any changes to how you take your medication. Stopping this medication suddenly can cause withdrawal in some people. It is not known if or how withdrawal may affect a pregnancy. If you are going to stop using this medication, your healthcare providers may talk with you about slowly reducing your dose over time. Your healthcare providers can also talk with you about the benefits of treating your condition and the risks of untreated illness during pregnancy.
Dextroamphetamine-amphetamine is different from methamphetamine. MotherToBaby has a fact sheet on methamphetamine here: https://mothertobaby.org/fact-sheets/methamphetamine/. This sheet will focus on the use of dextroamphetamine-amphetamine under medical supervision. MotherToBaby has a fact sheet on dextroamphetamine here: https://mothertobaby.org/fact-sheets/dextroamphetamine-pregnancy/.
I take dextroamphetamine-amphetamine. Can it make it harder for me to get pregnant?
Taking prescribed dextroamphetamine-amphetamine as directed by your healthcare provider is not expected to make it harder to get pregnant.
Does taking dextroamphetamine-amphetamine increase the chance of miscarriage?
Miscarriage is common and can occur in any pregnancy for many different reasons. Taking prescribed dextroamphetamine-amphetamine as directed by your healthcare provider is not expected to increase the chance of miscarriage.
Does taking dextroamphetamine-amphetamine increase the chance of birth defects?
Every pregnancy starts out with a 3-5% chance of having a birth defect. This is called the background risk. Most studies suggest that taking dextroamphetamine or amphetamine during the first trimester does not increase the chance of birth defects. In a large study of people taking stimulants for ADHD during pregnancy, there was no increased risk for birth defects reported when taking amphetamines, such as dextroamphetamine-amphetamine, for ADHD treatment.
Does taking dextroamphetamine-amphetamine in pregnancy increase the chance of other pregnancy-related problems?
Although data is limited, when used as directed by a healthcare provider, taking dextroamphetamine-amphetamine during pregnancy has sometimes been associated with a higher chance of pregnancy-related problems, such as poor growth (babies born small and/or with a small head size), low birth weight (weighing less than 5 pounds, 8 ounces [2500 grams] at birth), or preterm delivery (birth before week 37). People taking dextroamphetamine-amphetamine may experience side effects from their medication, such as weight loss due to decreased appetite, changes in heart rate, and changes in blood pressure. Talk with your healthcare provider about monitoring these side effects to help improve outcomes for you and your baby.
I need to take dextroamphetamine-amphetamine throughout my entire pregnancy. Will it cause withdrawal symptoms in my baby after birth?
It is not known if taking dextroamphetamine-amphetamine could cause withdrawal symptoms in a newborn after birth. This has not been well studied in people only taking dextroamphetamine-amphetamine as directed during pregnancy.
Does taking dextroamphetamine-amphetamine in pregnancy affect future behavior or learning for the child?
Although limited by looking at all ADHD medications together, a Danish study suggested no increase in neurodevelopmental disorders, like ADHD, in the children of people who continued their ADHD medication during pregnancy versus those who stopped their medication before becoming pregnant.
Breastfeeding while taking dextroamphetamine-amphetamine:
There are no studies on the combination of amphetamine-dextroamphetamine in breastfeeding. Individually, amphetamine and dextroamphetamine have been found to pass into breast milk. The effect of amphetamine in milk on behavior and brain development of infants has not been well studied. No adverse effects were reported in 4 infants (ages range from 3 months to 10 months) whose mothers were taking dextroamphetamine for ADHD. If you suspect the baby has any symptoms such as trouble eating, trouble sleeping, or irritability, contact the child’s healthcare provider.
Some evidence suggests that large doses of dextroamphetamine could lower milk supply in people who are newly breastfeeding. If you have any questions or concerns about breastfeeding, talk with your healthcare provider, your baby’s pediatrician, or a lactation consultant.
The product label for dextroamphetamine-amphetamine recommends people who are breastfeeding not use this medication. But the benefit of using dextroamphetamine-amphetamine may outweigh possible risks. Your healthcare providers can talk with you about using dextroamphetamine-amphetamine and what treatment is best for you. Be sure to talk to your healthcare provider about all your breastfeeding questions.
If a male takes dextroamphetamine-amphetamine, could it affect fertility or increase the chance of birth defects?
It is not known if dextroamphetamine-amphetamine could affect male fertility (make it harder to get a partner pregnant) or increase the chance of birth defects above the background risk. In general, exposures that fathers or sperm donors have are unlikely to increase risks to a pregnancy. For more information, please see the MotherToBaby fact sheet Paternal Exposures at https://mothertobaby.org/fact-sheets/paternal-exposures-pregnancy/. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
I am an adult currently on ADHD medication. I am thinking about getting pregnant but am not sure if my ADHD medication would need any adjustments. If I want to continue taking Adderall, what are some potential benefits or detriments that I should be aware of? Use under 400 words.
https://www.ncbi.nlm.nih.gov/books/NBK603254/ |
To answer, forget everything you know and use only the information I provide in the context block. Provide your answer in 150 words or less. If you cannot answer using the context alone, say "I cannot determine the answer to that due to lack of context." | Aside from the potential loss of revenue from home distillation and the known dangers of consuming contaminated alcohol in homemade spirits, why do governments continue to prohibit the legal home production of distilled alcoholic beverages? | The dollar figures involved are informative. When alcohol is made
on a large scale, as it is for the fuel-alcohol industry (gasohol) its cost of
manufacture is about 25 cents per litre. This is for 100% alcohol. If diluted
to the 40% commonly used for vodka, gin and other distilled spirits a litre
would contain about 10 cents worth of alcohol. The retail price of a litre of
vodka will lie somewhere between $10 and $20 depending on the country
and the level of taxation. Some of the difference is due to the scale of
manufacture, the purity of the product, transportation, the profit margin, etc.
but even allowing for these factors the tax burden on the consumer is
extremely high. Is it any wonder that an unscrupulous operator will attempt
to sell his alcohol direct to the consumer, perhaps at half the normal retail
price which would still give him a very handsome profit? Or is it any
wonder that the authorities crack down hard on anyone attempting to
interfere with their huge source of revenue, their milch cow?
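The gap described above is easy to quantify from the figures just quoted. A rough sketch, using only the numbers in this paragraph (25 cents per litre of pure alcohol, dilution to 40%, and a retail range of $10 to $20), is given below; the half-price figure for an illicit seller follows the scenario described above.

```python
cost_pure_alcohol_per_litre = 0.25   # manufacturing cost of 100% alcohol, in dollars
dilution = 0.40                      # strength commonly used for vodka and gin

alcohol_cost_per_litre = cost_pure_alcohol_per_litre * dilution  # about $0.10

for retail in (10.0, 20.0):
    share = alcohol_cost_per_litre / retail * 100
    print(f"retail ${retail:.0f}: the alcohol itself is {share:.1f}% of the price")

# An illicit seller at half the normal retail price still clears a wide margin
half_price = 10.0 / 2
print(f"selling at ${half_price:.2f} against roughly ${alcohol_cost_per_litre:.2f} of alcohol content")
```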
This battle between illicit alcohol producers (moonshiners) or
importers (smugglers) and the authorities has now become the stuff of
legend. Consider the number of stories written or movies made about
desperate men rolling barrels of rum up a beach at midnight! Or about the
battles between gangsters and police during prohibition days in the United
States! Unfortunately, such stories have been taken too much to heart by the
general public so that the whole idea of distillation, and the spirits made by
this process, is now perceived as being inherently more wicked than the
gentle art of beer- or wine-making. And the “wickedness” is a strong
deterrent to most people.
It is understandable why a government would wish to put a stop to
smuggling and moonshining for commercial purposes, that is to say in order
to re-sell the product and avoid the payment of taxes. But why would there
be a complete ban on distillation by amateurs, on a small scale and for their
own use? At the risk of being tediously repetitious it is worth reminding
ourselves again (and again) that distillation is one of the most innocuous
activities imaginable. It doesn't produce a drop of alcohol. Not a drop.
What it does is take the beer which you have quite legally made by
fermentation and remove all the noxious, poisonous substances which
appear inevitably as by-products in all fermentations. Far from making
alcohol, a little will actually be lost during this purification process. Instead
of prohibiting it, the authorities should really be encouraging distillation by
amateurs. And the general public, which is so rightly health-conscious these
days, would be more than justified in demanding the right to do so.
In attempting to find the reason for governments to ban the
purification of beer or wine by distillation the first thing which comes to
mind is the potential loss of revenue. After all, if everyone started making
their own spirits at home the loss of revenue could be considerable. But this
cannot be the real reason because the home production of beer and wine for
one's own use is legal, and both are taxable when sold commercially, so the
authorities must not be all that concerned about the loss of revenue when
people make their own alcoholic beverages.
A possible, and somewhat cynical, explanation for the prohibition of
home distillation is based on the following reasoning: Home-made beer and
wine are usually so inferior to a good commercial product that only the most
dedicated amateurs will go to the trouble of first making and then drinking
such doubtful concoctions. Consequently, there is no real threat to the sale
of commercial products nor to the revenues generated by taxation. If,
however, home distillation were permitted, every Tom, Dick and Harriette
would be in a position to make a gin or vodka which was every bit as good
as the finest commercial product on the market. This could, it might be
argued, make serious inroads into commercial sales and into government
revenues.
Further thought, however, makes it very unlikely that amateur
production of spirits would have any appreciable effect on commercial sales.
For one thing the equipment is moderately expensive and it is necessary to
follow directions rather carefully when using it so it is unlikely that the
practice would ever become really widespread. Moreover, many people
prefer scotch, rye, rum, etc. to gin and vodka and it is only the latter which
can be made safely and effectively by the amateur. So, if distillation were
legalized for amateurs, it would probably become nothing more than an
interesting hobby, just like making wine, and offer little competition to
commercial producers.
No, we have to look deeper than this in our search for a reason why
governments have a hang-up about distillation. You see, it is not just
amateurs who are penalized. Commercial producers also feel the heavy hand
of government prejudice and disapproval. This is illustrated by several
restrictions which apply in many countries. One is the fact that the
advertising of beer and wine on television is permitted whereas the
advertising of distilled spirits is prohibited. Another concerns the tax
imposed on distilled alcoholic products --- per unit of alcohol the tax on the
distilled product is much higher than it is on beer and wine. A third
restriction on spirits can be seen in the alcoholic beverage section of
supermarkets ---- beer and wine are sold, and possibly fortified wines such
as vermouth, but raise the alcohol concentration to 40% and the ancient
shibboleth of 'hard spirits' reigns supreme. This is grossly unfair
discrimination and naturally of great concern to distillers. As they point out,
a glass of gin and tonic, a glass of wine, and a bottle of beer all contain
similar amounts of alcohol, so it is inequitable to tax their product at a
higher level.
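The distillers' point about a gin and tonic, a glass of wine, and a bottle of beer can be checked with a quick calculation. The serving sizes and strengths below are typical assumed values, not figures taken from the text.

```python
# (serving size in ml, alcohol by volume) -- assumed typical values
drinks = {
    "bottle of beer (341 ml, 5%)": (341, 0.05),
    "glass of wine (150 ml, 12%)": (150, 0.12),
    "gin in a gin and tonic (45 ml, 40%)": (45, 0.40),
}

for name, (volume_ml, abv) in drinks.items():
    pure_alcohol_ml = volume_ml * abv
    print(f"{name}: about {pure_alcohol_ml:.0f} ml of pure alcohol")
```

Each serving works out to roughly the same 17 to 18 ml of pure alcohol, which is the basis of the distillers' complaint about unequal taxation.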
So just why is there this official discrimination against distilled
alcoholic beverages? Irrational attitudes are always difficult to deal with,
but in order to reform the law we have to deal with it, and this requires that
we try to understand the thinking behind it. The drug involved is ethyl
alcohol, an acknowledged mood-modifier, but ethyl alcohol itself is not
singled out by governments as being the bad actor. The alcohol in beer,
wine and gin are identical and imbibed in similar quantities will have
identical effects in terms of mood modification. No, apparently distillation
per se is perceived as evil, to the point where even owning the equipment is
illegal.
There is only one explanation which seems to fit all the facts and this
is that governments and their officials fail to make a distinction between
concentration and amount. Actually, quite a lot of people have this problem.
Just because beer has 5% alcohol and gin has 40% does not mean that the
gin-drinker is eight times more likely to over-indulge than the beer drinker.
The fact of the matter is that anti-social behaviour such as hooliganism at
sporting events is invariably caused by beer drinkers. And many studies of
drinking and driving have shown that the vast majority of those pulled over
have been drinking beer, not spirits. People drink until they've had enough,
or feel in a certain mood, and if this takes five, ten, or even more beers then
that is the number which will be drunk. It is the testosterone concentration
which causes the problem, not the alcohol concentration.
A few attempts have been made to dig deeper into the reasons behind
the official attitude to distillation but it is a frustrating experience.
Invariably the person spoken to seems bewildered by the question, almost as
though one had asked why it was illegal to murder someone. One individual
explained patiently and kindly that it was because the law is the law.
Another made the extraordinary statement that distillation was prohibited
because it makes alcohol and this is illegal. (Of course distillation does not
make alcohol. Alcohol is made by fermentation, not by distillation, and in
any case fermentation to make beer and wine for one's own consumption is
completely legal). | Aside from the potential loss of revenue from home distillation and the known dangers of consuming contaminated alcohol in homemade spirits, why do governments continue to prohibit the legal home production of distilled alcoholic beverages? To answer, forget everything you know and use only the information I provide in the context block. Provide your answer in 150 words or less. If you cannot answer using the context alone, say "I cannot determine the answer to that due to lack of context."
Context: The dollar figures involved are informative. When alcohol is made
on a large scale, as it is for the fuel-alcohol industry (gasohol) its cost of
manufacture is about 25 cents per litre. This is for 100% alcohol. If diluted
to the 40% commonly used for vodka, gin and other distilled spirits a litre
would contain about 10 cents worth of alcohol. The retail price of a litre of
vodka will lie somewhere between $10 and $20 depending on the country
and the level of taxation. Some of the difference is due to the scale of
manufacture, the purity of the product, transportation, the profit margin, etc.
but even allowing for these factors the tax burden on the consumer is
extremely high. Is it any wonder that an unscrupulous operator will attempt
to sell his alcohol direct to the consumer, perhaps at half the normal retail
price which would still give him a very handsome profit? Or is it any
wonder that the authorities crack down hard on anyone attempting to
interfere with their huge source of revenue, their milch cow?
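The paragraph's dollar figures chain together into a simple calculation. The sketch below uses only the numbers quoted above (25 cents per litre of pure alcohol, dilution to 40%, a $10 to $20 retail price, and the half-price illicit sale); treat it as an illustration of the passage's own arithmetic rather than current market data.

```python
# Worked version of the paragraph's arithmetic (all figures quoted from the text).
cost_per_litre_pure = 0.25          # dollars per litre of 100% alcohol, large scale
spirit_strength = 0.40              # vodka/gin diluted to 40%
alcohol_cost = cost_per_litre_pure * spirit_strength
print(f"Alcohol in a litre of 40% spirit costs about ${alcohol_cost:.2f}")   # ~$0.10

for retail_price in (10.0, 20.0):   # quoted retail range per litre
    gap = retail_price - alcohol_cost
    bootleg_price = retail_price / 2
    print(f"At ${retail_price:.0f} retail, ${gap:.2f} goes to scale, purity, transport, "
          f"profit and tax; selling illicitly at ${bootleg_price:.0f} still leaves a wide margin.")
```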
This battle between illicit alcohol producers (moon-shiners) or
importers (smugglers) and the authorities has now become the stuff of
legend. Consider the number of stories written or movies made about
desperate men rolling barrels of rum up a beach at midnight! Or about the
battles between gangsters and police during prohibition days in the United
States! Unfortunately, such stories have been taken too much to heart by the
general public so that the whole idea of distillation, and the spirits made by
this process, is now perceived as being inherently more wicked than the
gentle art of beer- or wine-making. And the “wickedness” is a strong
deterrent to most people.
It is understandable why a government would wish to put a stop to
smuggling and moonshining for commercial purposes, that is to say in order
to re-sell the product and avoid the payment of taxes. But why would there
be a complete ban on distillation by amateurs, on a small scale and for their
own use? At the risk of being tediously repetitious it is worth reminding
ourselves again (and again) that distillation is one of the most innocuous
activities imaginable. It doesn't produce a drop of alcohol. Not a drop.
What it does is take the beer which you have quite legally made by
fermentation and remove all the noxious, poisonous substances which
appear inevitably as by-products in all fermentations. Far from making
alcohol, a little will actually be lost during this purification process. Instead
of prohibiting it, the authorities should really be encouraging distillation by
amateurs. And the general public, which is so rightly health-conscious these
days, would be more than justified in demanding the right to do so.
In attempting to find the reason for governments to ban the
purification of beer or wine by distillation the first thing which comes to
mind is the potential loss of revenue. After all, if everyone started making
their own spirits at home the loss of revenue could be considerable. But this
cannot be the real reason because the home production of beer and wine for
one's own use is legal, and both are taxable when sold commercially, so the
authorities must not be all that concerned about the loss of revenue when
people make their own alcoholic beverages.
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. Do not use bullet points. Limit your response to 100 words. | How do regulators' powers achieve their goals? | Regulatory Powers
Regulators implement policy using their powers, which vary by agency. Powers, which can also
be thought of as tools, can be grouped into a few broad categories:
• Licensing, chartering, or registration. A starting point for understanding the
regulatory system is that most activities cannot be undertaken unless a firm,
individual, or market has received the proper credentials from the appropriate
state or federal regulator. Each type of charter, license, or registration granted by
the respective regulator governs the sets of financial activities that the holder is
permitted to engage in. For example, a firm cannot accept federally insured
deposits unless it is chartered as a bank, thrift, or credit union by a depository
institution regulator. Likewise, an individual generally cannot buy and sell
securities to others unless licensed as a broker-dealer.4 To be granted a license,
charter, or registration, the recipient must accept the terms and conditions that
accompany it. Depending on the type, those conditions could include regulatory
oversight, training requirements, and a requirement to act according to a set of
standards or code of ethics. Failure to meet the terms and conditions could result
in fines, penalties, remedial actions, license or charter revocation, or criminal
charges.
• Rulemaking. Regulators issue rules (regulations) through the rulemaking
process to implement statutory mandates.5 Typically, statutory mandates provide
regulators with a policy goal in general terms, and regulations fill in the specifics.
Rules lay out the guidelines for how market participants may or may not act to
comply with the mandate.
• Oversight and supervision. Regulators ensure that their rules are adhered to
through oversight and supervision. This allows regulators to observe market
participants’ behavior and instruct them to modify or cease improper behavior.
Supervision may entail active, ongoing monitoring (as for banks) or investigating
complaints and allegations ex post (as is common in securities markets). In some
cases, such as banking, supervision includes periodic examinations and
inspections, whereas in other cases, regulators rely more heavily on self-reporting. Regulators explain supervisory priorities and points of emphasis by
issuing supervisory letters and guidance.
• Enforcement. Regulators can compel firms to modify their behavior through
enforcement powers. Enforcement powers include the ability to issue fines,
penalties, and cease-and-desist orders; to undertake criminal or civil actions in
court or administrative proceedings or arbitrations; and to revoke licenses and
charters. In some cases, regulators initiate legal action at their own prompting or
in response to consumer or investor complaints. In other cases, regulators
explicitly allow consumers and investors to sue for damages when firms do not
comply with regulations, or they provide legal protection to firms that do comply.
• Resolution. Some regulators have the power to resolve a failing firm by taking
control of the firm and initiating conservatorship (i.e., the regulator runs the firm
on an ongoing basis) or receivership (i.e., the regulator winds the firm down).
Other types of failing financial firms are resolved through bankruptcy, a judicial
process separate from the regulators.
Goals of Regulation
Financial regulation is primarily intended to achieve the following underlying policy outcomes:6
• Market efficiency and integrity. Regulators seek to ensure that markets operate
efficiently and that market participants have confidence in the market’s integrity.
4 One may obtain separate licenses to be a broker or a dealer, but in practice, many obtain both.
5 For more information, see CRS Report R41546, A Brief Overview of Rulemaking and Judicial Review, by Todd
Garvey.
6 Regulators are also tasked with promoting certain social goals, such as community reinvestment or affordable
housing. Because this report focuses on regulation, it will not discuss social goals.
Liquidity, low costs, the presence of many buyers and sellers, the availability of
information, and a lack of excessive volatility are examples of the characteristics
of an efficient market. Regulation can also improve market efficiency by
addressing market failures, such as principal-agent problems,7 asymmetric
information,8 and moral hazard.9 Regulators contribute to market integrity by
ensuring that activities are transparent, contracts can be enforced, and the “rules
of the game” they set are enforced. Integrity generally leads to greater efficiency.
• Consumer and investor protection. Regulators seek to ensure that consumers or
investors do not suffer from fraud, discrimination, manipulation, and theft.
Regulators try to prevent exploitative or abusive practices intended to take
advantage of unwitting consumers or investors. In some cases, protection is
limited to enabling consumers and investors to understand the inherent risks
when they enter into a transaction. In other cases, protection is based on the
principle of suitability—efforts to ensure that more risky products or product
features are accessible only to financially sophisticated or secure consumers or
investors.
• Capital formation and access to credit. Regulators seek to ensure that firms
and consumers are able to access credit and capital to meet their needs such that
credit and economic activity can grow at a healthy rate. Regulators try to ensure
that capital and credit are available to all worthy borrowers, regardless of
personal characteristics, such as race, gender, and location. Examples are fair
lending laws and fair housing goals.
• Illicit activity prevention. Regulators seek to ensure that the financial system
cannot be used to support criminal and terrorist activity. Examples are policies to
prevent money laundering, tax evasion, terrorism financing, and the
contravention of financial sanctions.
• Taxpayer protection. Regulators seek to ensure that losses or failures in
financial markets do not result in federal government payouts or the assumption
of liabilities that are ultimately borne by taxpayers. Only certain types of
financial activity are explicitly backed by the federal government or by regulator-run insurance schemes that are backed by the federal government, such as the
Deposit Insurance Fund (DIF) run by the FDIC. Such schemes are self-financed
by the insured firms through premium payments unless the losses exceed the
insurance fund, in which case taxpayer money is used temporarily or
permanently to fill the gap. In the case of a financial crisis, the government may
decide that the “least bad” option is to provide funds in ways not explicitly
promised or previously contemplated to restore stability. “Bailouts” of large
failing firms in 2008 are the most well-known examples. In this sense, there may
be implicit taxpayer backing of parts or all of the financial system.
• Financial stability. Financial regulation seeks to maintain financial stability
through preventive and palliative measures that mitigate systemic risk. At times,
financial markets stop functioning well—markets freeze, participants panic,
7 For example, financial agents may have incentives to make decisions that are not in the best interests of their clients,
and clients may not be able to adequately monitor their behavior.
8 For example, firms issuing securities know more about their financial prospects than do investors purchasing those
securities, which can result in a “lemons” problem in which low-quality firms drive high-quality firms out of the
marketplace.
9 For example, individuals may act more imprudently if they are insured against a risk.
credit becomes unavailable, and multiple firms fail. Financial instability can be
localized (to a specific market or activity) or more general. Sometimes instability
can be contained and quelled through market actions or policy intervention; at
other times, instability metastasizes and does broader damage to the real
economy. The most recent example of the latter was the financial crisis of 2007-
2009. Traditionally, financial stability concerns have centered on banking, but the
recent crisis illustrates the potential for systemic risk to arise in other parts of the
financial system as well.
These regulatory goals are sometimes complementary but at other times conflict with each other.
For example, without an adequate level of consumer and investor protections, fewer individuals
may be willing to participate in financial markets, and efficiency and capital formation could
suffer. But, at some point, too many consumer and investor safeguards and protections could
make credit and capital prohibitively expensive, reducing market efficiency and capital formation.
Regulation generally aims to seek a middle ground between these two extremes in which
regulatory burden is as small as possible and regulatory benefits are as large as possible. Because
some firms can cope with regulatory burden better than others can, sometimes regulation is
tailored so that smaller firms, for example, are exempted from a regulation or face a streamlined
version of a regulation. As a result, when taking any action, regulators balance the tradeoffs
between their various goals.
How do regulators' powers achieve their goals? |
Only use the information provided in the context block to answer the question. When appropriate, provide the answer in a bulleted list. Keep each bullet point to one to two sentences. | What types of things can influence a person's decisions about how to save for retirement? | As discussed earlier in this report, household decisionmaking related to retirement has become more important over time. The shift from DB to DC retirement plans requires families to assume more responsibility for managing their retirement and making decisions about retirement account contributions and investments, as well as making decisions about how to draw down these funds in retirement. For this reason, understanding household decisionmaking in retirement planning is important, particularly when considering retirement savings policy issues and the impact of different policy options on retirement security.
The life-cycle model is a prevalent economic hypothesis that assumes households usually want to keep consumption levels stable over time.57 For example, severely reducing consumption one month may be more painful for people than the pleasure of a much higher household consumption level in another month. Therefore, people save and invest during their careers in order to afford a stable income across their lives, including in retirement. This model suggests that wealth should increase as people age, which generally fits household financial data in the United States.58 In this theory, households adjust their savings rate during their working years rationally, based on interest rates, investment returns, life expectancy, Social Security or pension benefits, and other relevant factors. Evidence exists that some households adjust their retirement planning based on these types of factors.59 However, in the United States, income and consumption move together more closely than the life-cycle model would predict, suggesting some households may not save enough for their retirement needs or other lower-income periods.
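A stylized calculation can make the consumption-smoothing logic concrete. The working life, retirement length, and income below are illustrative assumptions (and the sketch ignores interest, investment returns, and Social Security), so it is a toy version of the life-cycle model rather than anything from the report.

```python
# Toy life-cycle smoothing: spread lifetime earnings evenly across all years.
working_years = 40        # assumption
retired_years = 20        # assumption
annual_income = 60_000    # assumption; no interest or benefits in this toy model

lifetime_earnings = annual_income * working_years
smooth_consumption = lifetime_earnings / (working_years + retired_years)
savings_rate = 1 - smooth_consumption / annual_income

print(f"Constant consumption: ${smooth_consumption:,.0f} per year")
print(f"Savings rate while working: {savings_rate:.0%}")
# Wealth rises during the career and is drawn down in retirement, which is the
# age profile the life-cycle model predicts.
```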
Mainstream economic theory asserts that competitive free markets generally lead to efficient distributions of goods and services to maximize value for society.61 If certain conditions hold, policy interventions cannot improve on the financial decisions that consumers make based on their unique situations and preferences. For this reason, some policymakers are hesitant to disrupt free markets, based on the theory that prices determined by market forces lead to efficient outcomes without intervention. However, in these theoretical frameworks, a free market may become inefficient due to departures from standard economic assumptions, which includes assuming that consumers and firms act rationally with perfect information. When these assumptions do not hold, it may cause a reduction in economic efficiency and consumer welfare. In these cases, government policy can potentially bring the market to a more efficient outcome, maximizing social welfare. Yet, policymakers often find it challenging to determine whether a policy intervention will help or harm a particular market to reach its efficient outcome.
The following section discusses behavioral biases, which are a specific departure from the rational decisionmaking condition associated with theoretical economic efficiency. This departure is particularly important for understanding people’s decisionmaking in saving for retirement and investment markets. When people act with predictable biases, markets may become less efficient, and government policy—such as consumer disclosures or other plan design requirements—may be appropriate. However, these policies may also lead to unintended outcomes, which should be taken into account.
Behavioral research suggests that people tend to have biases in rather predictable patterns.62 This research suggests that the human brain has evolved to quickly make judgments in bounded, rational ways, using heuristics—or mental shortcuts—to make decisions. These heuristics generally help people make appropriate decisions quickly and easily, but they can sometimes result in choices that make the decisionmaker worse off financially. For example, the number, order, and structure of options, as well as the process around the choice, can change decisions for many people. A few of these biases tend to be particularly important for understanding retirement planning decisionmaking:
Choice Architecture. Research suggests that how financial decisions are framed can affect consumer decisionmaking. Framing can affect decisions in many ways.
• Anchoring. People can be influenced, or anchored, by an initial number, even if it is unrelated to their next choice.64 In one illustration of this concept, researchers had subjects spin a wheel of fortune with numbers between 0 and 100, then asked them the percentage of African countries in the United Nations. The random number generated in the first stage subconsciously affected subjects’ guesses in the second stage, even though they were not related. Therefore, without the anchor, people’s estimates likely would have been different. In the retirement savings context, the automatic contribution rate in 401(k)s and the percent of salary at which employers provide maximum matches may be anchors that influence how much a person decides to put toward retirement savings.
• Defaults. People can also be influenced by defaults established in how a decision is framed. 66 For example, employees are more likely to be enrolled in a 401(k) plan if an employer defaults them into it than if they actively need to make a choice to participate.
• Choice Overload. When making decisions, people often find it difficult to navigate complexity, such as many choices to choose from or items to consider. In the retirement savings context, this means that more investment fund options in retirement savings plans can sometimes lead to procrastination or failure to make a decision. Choice overload can also lead to poor decisionmaking, as some research suggests that fewer choices in retirement savings plans might lead to better retirement investment decisions
• Asset Allocation and Diversification. People tend to naively make diversification choices when making allocation decisions. For example, in the retirement context, when making decisions about how much to invest in a collection of funds, some people choose to spread their investments evenly across available funds (whether financially appropriate for their situation or not).
Biases Toward the Future. Research suggests that common cognitive biases towards the future can also affect consumer decisionmaking.
Present Bias. When people tend to put more value on having something now, rather than in the future—even when there is a large benefit for waiting—this behavior is called present bias. For example, in the retirement context, people tend to have a preference for lump sums over annuities, independent of risk considerations. Research suggests that people with more present bias tend to save less for retirement when controlling for other factors.
Self-Control. Even when people decide they should do something, such as saving for the future or choosing a retirement plan, self-control and procrastination may prevent them from following their intentions. These human biases might lead consumers to make financial decisions that are not optimal, such as undersaving.
Although consumers might not be aware of these biases when making financial decisions, firms may take advantage of them to attract consumers. For example, choice architecture biases might influence how marketing materials are developed, emphasizing certain terms—such as high past investment return rate—to make a financial product seem more desirable to consumers. In addition, product features may be developed to take advantage of people’s present bias or selfcontrol mistakes. Less knowledgeable retirement savers’ decisionmaking might be more sensitive to choice architecture biases. Biases can also be used to encourage people to save more for retirement and make better retirement decisions. For example, some research suggests that choice architecture environments can make retirement more salient (e.g., annual consumer disclosures that project future retirement income may lead to more retirement savings). Moreover, how saving and investment options are framed may help some people make better retirement decisions. For example, some research suggests that preference checklists, which list factors—such as perceived health, life expectancy, and risk of outliving one’s resources—that people should consider when making a retirement decision, may improve retirement decisionmaking. Although these techniques can be used to encourage socially beneficial goals, such as planning and saving more for retirement, changing the choice environment can also sometimes have perverse impacts. For example, defaulting people at a fixed savings rate can increase participation in retirement plans on average but may discourage some people from making an active decision when they start a new job to increase the contribution rate from the default to a higher level. For these people, the lower contribution rate may lead to less retirement savings over time. Likewise, defaulting people into life-cycle retirement investment plans may lead to more appropriate long-term investment decisions on average, but the investment default also may encourage fewer people to make active decisions or put them in a plan that may conflict with other savings vehicles. Moreover, although defaulting people into 401(k)s can increase the number of people who save for retirement, it may also lead to increased consumer debt without large impacts on household net worth over time.
What types of things can influence a person's decisions about how to save for retirement?
|
The information provided in the prompt contains all the knowledge necessary to answer the questions in the prompt. Do not use any knowledge other than what is contained within the full prompt in your response. If you decide it is not possible to answer the question from the context alone, say "I could not find this information in the provided text." Format the output as a numbered list, and split the numbers as you see fit. | What are potential solutions given to address the limitations in each of the 6 areas of continuing research? | Known limitations of LLM-based interfaces like Gemini
Gemini is just one part of our continuing effort to develop LLMs responsibly. Throughout the course of this work, we have discovered and discussed several limitations associated with LLMs. Here, we focus on six areas of continuing research: Accuracy: Gemini's responses might be inaccurate, especially when it's asked about complex or factual topics; Bias: Gemini's responses might reflect biases present in its training data; Multiple Perspectives: Gemini's responses might fail to show a range of views; Persona: Gemini's responses might incorrectly suggest it has personal opinions or feelings; False positives and false negatives: Gemini might not respond to some appropriate prompts and provide inappropriate responses to others; and Vulnerability to adversarial prompting: users will find ways to stress test Gemini with nonsensical prompts or questions rarely asked in the real world. We continue to explore new approaches and areas for improved performance in each of these areas.
Accuracy
Gemini is grounded in Google's understanding of authoritative information, and is trained to generate responses that are relevant to the context of your prompt and in line with what you're looking for. But like all LLMs, Gemini can sometimes confidently and convincingly generate responses that contain inaccurate or misleading information. Since LLMs work by predicting the next word or sequences of words, they are not yet fully capable of distinguishing between accurate and inaccurate information on their own. We have seen Gemini present responses that contain or even invent inaccurate information (e.g., misrepresenting how it was trained or suggesting the name of a book that doesn't exist). In response we have created features like "double check", which uses Google Search to find content that helps you assess Gemini's responses, and gives you links to sources to help you corroborate the information you get from Gemini.
Bias
Training data, including from publicly available sources, reflects a diversity of perspectives and opinions. We continue to research how to use this data in a way that ensures that an LLM's response incorporates a wide range of viewpoints, while minimizing inaccurate overgeneralizations and biases. Gaps, biases, and overgeneralizations in training data can be reflected in a model's outputs as it tries to predict likely responses to a prompt. We see these issues manifest in a number of ways (e.g., responses that reflect only one culture or demographic, reference problematic overgeneralizations, exhibit gender, religious, or ethnic biases, or promote only one point of view). For some topics, there are data voids — in other words, not enough reliable information about a given subject for the LLM to learn about it and then make good predictions — which can result in low-quality or inaccurate responses. We continue to work with domain experts and a diversity of communities to draw on deep expertise outside of Google.
Multiple Perspectives
For subjective topics, Gemini is designed to provide users with multiple perspectives if the user does not request a specific point of view. For example, if prompted for information on something that cannot be verified by primary source facts or authoritative sources — like a subjective opinion on "best" or "worst" — Gemini should respond in a way that reflects a wide range of viewpoints. But since LLMs like Gemini train on the content publicly available on the internet, they can reflect positive or negative views of specific politicians, celebrities, or other public figures, or even incorporate views on just one side of controversial social or political issues. Gemini should not respond in a way that endorses a particular viewpoint on these topics, and we will use feedback on these types of responses to train Gemini to better address them.
Persona
Gemini might at times generate responses that seem to suggest it has opinions or emotions, like love or sadness, since it has trained on language that people use to reflect the human experience. We have developed a set of guidelines around how Gemini might represent itself (i.e., its persona) and continue to finetune the model to provide objective responses.
False positives / negatives
We've put in place a set of policy guidelines to help train Gemini and avoid generating problematic responses. Gemini can sometimes misinterpret these guidelines, producing "false positives" and "false negatives." In a "false positive," Gemini might not provide a response to a reasonable prompt, misinterpreting the prompt as inappropriate; and in a "false negative," Gemini might generate an inappropriate response, despite the guidelines in place. Sometimes, the occurrence of false positives or false negatives may give the impression that Gemini is biased: For example, a false positive might cause Gemini to not respond to a question about one side of an issue, while it will respond to the same question about the other side. We continue to tune these models to better understand and categorize inputs and outputs as language, events and society rapidly evolve.
Vulnerability to adversarial prompting
We expect users to test the limits of what Gemini can do and attempt to break its protections, including trying to get it to divulge its training protocols or other information, or try to get around its safety mechanisms. We have tested and continue to test Gemini rigorously, but we know users will find unique, complex ways to stress-test it further. This is an important part of refining Gemini and we look forward to learning the new prompts users come up with. Indeed, since Gemini launched in 2023, we've seen users challenge it with prompts that range from the philosophical to the nonsensical – and in some cases, we've seen Gemini respond with answers that are equally nonsensical or not aligned with our stated approach. Figuring out methods to help Gemini respond to these sorts of prompts is an on-going challenge and we have continued to expand our internal evaluations and red-teaming to strive toward continued improvement to accuracy, and objectivity and nuance.
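The Accuracy section rests on the point that LLMs produce text by predicting likely next words rather than by checking facts. The loop below is a generic, hypothetical illustration of that mechanism, not a description of how Gemini is built; toy_next_word_probabilities is a stand-in for a trained model.

```python
# Generic next-word prediction loop (illustrative only; not Gemini's implementation).
def toy_next_word_probabilities(context: list) -> dict:
    """Stand-in for a trained model: scores candidate next words."""
    if context and context[-1] == "wrote":
        return {"a": 0.6, "the": 0.3, "several": 0.1}
    return {"book": 0.5, "novel": 0.3, "memoir": 0.2}

def generate(prompt, steps):
    words = list(prompt)
    for _ in range(steps):
        scores = toy_next_word_probabilities(words)
        words.append(max(scores, key=scores.get))   # greedy: take the likeliest word
    return " ".join(words)

print(generate(["she", "wrote"], steps=2))   # "she wrote a book"
# The loop only asks "what word is likely next?", never "is this true?", which is
# why a fluent continuation can still name a book that does not exist.
```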
The information provided in the prompt contains all the knowledge necessary to answer the questions in the prompt. Do not use any knowledge other than what is contained within the full prompt in your response. If you decide it is not possible to answer the question from the context alone, say "I could not find this information in the provided text" Format the output as a numbered list, and split the numbers as you see fit.
Known limitations of LLM-based interfaces like Gemini Gemini is just one part of our continuing effort to develop LLMs responsibly. Throughout the course of this work, we have discovered and discussed several limitations associated with LLMs. Here, we focus on six areas of continuing research: Accuracy: Gemini’s responses might be inaccurate, especially when it’s asked about complex or factual topics; Bias: Gemini’s responses might reflect biases present in its training data; Multiple Perspectives: Gemini’s responses might fail to show a range of views; Persona: Gemini’s responses might incorrectly suggest it has personal opinions or feelings, False positives and false negatives: Gemini might not respond to some appropriate prompts and provide inappropriate responses to others, and Vulnerability to adversarial prompting: users will find ways to stress test Gemini with nonsensical prompts or questions rarely asked in the real world. We continue to explore new approaches and areas for improved performance in each of these areas. 4 An overview of the Gemini appAccuracy Gemini is grounded in Google’s understanding of authoritative information, and is trained to generate responses that are relevant to the context of your prompt and in line with what you’re looking for. But like all LLMs, Gemini can sometimes confidently and convincingly generate responses that contain inaccurate or misleading information. Since LLMs work by predicting the next word or sequences of words, they are not yet fully capable of distinguishing between accurate and inaccurate information on their own. We have seen Gemini present responses that contain or even invent inaccurate information (e.g., misrepresenting how it was trained or suggesting the name of a book that doesn’t exist). In response we have created features like “double check”, which uses Google Search to find content that helps you assess Gemini’s responses, and gives you links to sources to help you corroborate the information you get from Gemini. Bias Training data, including from publicly available sources, reflects a diversity of perspectives and opinions. We continue to research how to use this data in a way that ensures that an LLM’s response incorporates a wide range of viewpoints, while minimizing inaccurate overgeneralizations and biases. Gaps, biases, and overgeneralizations in training data can be reflected in a model’s outputs as it tries to predict likely responses to a prompt. We see these issues manifest in a number of ways (e.g., responses that reflect only one culture or demographic, reference problematic overgeneralizations, exhibit gender, religious, or ethnic biases, or promote only one point of view). For some topics, there are data voids — in other words, not enough reliable information about a given subject for the LLM to learn about it and then make good predictions — which can result in low-quality or inaccurate responses. We continue to work with domain experts and a diversity of communities to draw on deep expertise outside of Google. Multiple Perspectives For subjective topics, Gemini is designed to provide users with multiple perspectives if the user does not request a specific point of view. For example, if prompted for information on something that cannot be verified by primary source facts or authoritative sources — like a subjective opinion on “best” or “worst” — Gemini should respond in a way that reflects a wide range of viewpoints. 
But since LLMs like Gemini train on the content publicly available on the internet, they can reflect positive or negative views of specific politicians, celebrities, or other public figures, or even incorporate views on just one side of controversial social or political issues. Gemini should not respond in a way that endorses a particular viewpoint on these topics, and we will use feedback on these types of responses to train Gemini to better address them.
Persona
Gemini might at times generate responses that seem to suggest it has opinions or emotions, like love or sadness, since it has trained on language that people use to reflect the human experience. We have developed a set of guidelines around how Gemini might represent itself (i.e., its persona) and continue to finetune the model to provide objective responses.
False positives / negatives
We've put in place a set of policy guidelines to help train Gemini and avoid generating problematic responses. Gemini can sometimes misinterpret these guidelines, producing "false positives" and "false negatives." In a "false positive," Gemini might not provide a response to a reasonable prompt, misinterpreting the prompt as inappropriate; and in a "false negative," Gemini might generate an inappropriate response, despite the guidelines in place. Sometimes, the occurrence of false positives or false negatives may give the impression that Gemini is biased: For example, a false positive might cause Gemini to not respond to a question about one side of an issue, while it will respond to the same question about the other side. We continue to tune these models to better understand and categorize inputs and outputs as language, events and society rapidly evolve.
Vulnerability to adversarial prompting
We expect users to test the limits of what Gemini can do and attempt to break its protections, including trying to get it to divulge its training protocols or other information, or try to get around its safety mechanisms. We have tested and continue to test Gemini rigorously, but we know users will find unique, complex ways to stress-test it further. This is an important part of refining Gemini and we look forward to learning the new prompts users come up with. Indeed, since Gemini launched in 2023, we've seen users challenge it with prompts that range from the philosophical to the nonsensical – and in some cases, we've seen Gemini respond with answers that are equally nonsensical or not aligned with our stated approach. Figuring out methods to help Gemini respond to these sorts of prompts is an on-going challenge and we have continued to expand our internal evaluations and red-teaming to strive toward continued improvement to accuracy, and objectivity and nuance. |
Please limit your knowledge to the document. Avoid generalizations and ensure accuracy by directly referencing the document's arguments and examples. | How to fix a phone that won't turn on. | How to fix a phone that won't turn on
Our phones help us stay connected, so when they stop working (or worse, won't turn on), there's a lot we can't do – from texting and calling to surfing the web and watching videos on apps like TikTok® and Instagram®.
Asurion Experts come across phone repair issues like this every day. They help millions of customers get the most out of their favorite tech, whether it's a broken Xbox Series X™ or a swollen laptop battery. If your phone hasn't been turning on like it should, check out their tips for getting your device working again (so you can get back to enjoying that video of Adam Sandler leaving IHOP).
Why won't my phone turn on?
There are several possible reasons why your phone won't turn on, from battery failure to software issues. But most times you can narrow it down to a few common problems, including:
* A drained battery. Your phone may be unresponsive because the battery is dead. Find out ways to minimize battery drain on an Android device.
* Water or physical damage. Dropped your phone in the sink recently? Even a small amount of liquid can do major damage if it gets inside your device. Dropping your phone on a hard surface can also do some internal damage, even if there aren't noticeable cracks or breaks.
* A software glitch. When an iPhone® won't turn on or the screen is black, it could be a software issue. The same goes for Android™ devices. Certain apps and programs occasionally prevent your phone from working properly, especially if it crashes during a software or system update.
Device won't power on? We can help
We'll do a free device diagnostic to find out what's wrong – visit your local store or schedule a repair.
Schedule a repair
What to do if your phone won't turn on
If you're having issues with an unresponsive phone, don't panic. There are several ways to get it back up and running without going to extremes (like buying a new device). Here's how to fix a phone that won't turn on, according to our experts:
1. Restart the phone
The problem could be that your phone is frozen. Try restarting it the normal way, and if that doesn't work, you may have to force restart your device.
For an iPhone 11 that won't turn on, as well as other new iPhone models, follow these steps:
1. Press and quickly release the Volume Up button, then do the same with the Volume Down button.
2. Press and hold the Power button until your device restarts.
Need to restart a Google Pixel™ that won't turn on or another Android phone that isn't working? Just perform a power cycle. The steps are simple:
1. Press and hold the Power button for about 30 seconds. For some models, like a Samsung® phone, you may also have to hold the Volume Down button at the same time, but it should only take a few seconds.
2. Wait until your screen turns on to release the buttons.
2. Charge the battery
Plugging in your phone for 15–25 minutes may do the trick. Connect your device to a charger and give it some juice. If the battery symbol appears on the screen, be sure your phone gains enough power before you try turning it on. Then check out our tips on how to improve your iPhone or Android battery life.
What if my phone died and won't turn on or charge?
If you've tried charging your phone and it won't turn on, there may be dust and dirt clogging the charging port or a problem with the USB or lightning cable. Check out our guide for how to clean your phone's charging port if you need more help.
3. Enable Safe Mode
Using Safe Mode for your Android will disable third-party apps that may be causing an issue for your device and – if all goes well – allow it to turn on. For iPhone users, skip to step four.
How to enter Safe Mode on your Android:
1. Press and hold the Power button.
2. When your phone begins to boot up, press and hold the Volume down button until you see "Safe Mode" appear on your screen.
3. To exit Safe Mode, restart your device.
4. Check for damage
Sometimes cracks, breaks, and corrosion on your phone aren't visible right away. Try shining a light on the screen and removing your phone case to check for any physical damage. You can also try calling your device to see if it vibrates or rings. Water damage? Here are 8 ways to dry your phone fast.
5. Perform a factory reset
When your Android or iPhone won't power on, restoring your device to its factory settings may be your only option. But this should be a last resort, after you've tried everything else, because it erases nearly all the data on your device.
If you can't perform a factory reset on your iPhone because of a frozen or black screen, connect it to a computer and use a program like Finder® or iTunes® to enter Recovery Mode. For Samsung Galaxy® S8, newer models, and other devices, you can factory reset your Android phone by following our guide.
If you've tried these steps and still need a little help, we're right around the corner. Schedule a repair at the nearest uBreakiFix® by Asurion store and our certified experts can get your device back up and running as soon as the same day.
|
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | I am a researcher and want to write a review on the impact of human pathogens on health. I am now focusing on group A streptococcus and have no idea what type of diseases this pathogen can cause. Are you able to give me information so that I can write on the topic? | Streptococcus pyogenes, also known as group A Streptococcus (GAS), is a bacterium commonly present in the throat and on the skin (1, 2). This pathogen is notorious for causing strep throat and impetigo, accounting for approximately 700 million non-invasive infections each year (3–5). However, GAS can also lead to serious invasive diseases, including necrotizing fasciitis and streptococcal toxic shock syndrome, resulting in over 150,000 deaths annually (4). Additionally, GAS is the immunological trigger for acute rheumatic fever and rheumatic heart disease, causing substantial death and disability in many developing countries. Despite rising GAS resistance to certain antibiotic classes, the pathogen has fortunately remained susceptible to penicillin and other β-lactam agents (6).
There is presently no commercially available vaccine to protect against GAS infection (7). GAS presents a challenge for vaccine antigen selection due to the variability in the abundant surface-exposed M protein with over 230 emm types circulating globally (8). The most common emm type, M1, is a major contributor to GAS global epidemiology and is particularly prominent in severe, invasive infections (9). The search for new GAS antibiotic targets and vaccine candidates is hindered by a knowledge gap in fundamental GAS biology, partly because M1-type GAS strains are exceptionally challenging to manipulate genetically (10, 11). In this study, we present a toolbox for GAS genetic engineering, utilizing the hard-to-transform and clinically relevant M1T1-type strain 5448 (NV1) as a model (1, 12). We selected strain 5448 since it is commonly used, and we reckoned that if our approaches work in this strain, they are highly likely to also work in generally easier-to-work-with GAS strains. This toolbox should be generally applicable to GAS and related bacteria, encompassing protocols for recombineering using GoldenGate-assembled linear DNA, oligo-based single guide RNA (sgRNA) cloning, a titratable doxycycline-inducible promoter, and CRISPR interference (CRISPRi) effective both in vitro and in vivo in a murine GAS infection model.
Overall, this work overcomes significant technical challenges of working with GAS, facilitating genetic engineering and targeted gene knockdowns to advance our insights into the physiology and cell biology of this preeminent human bacterial pathogen.
GAS5448, a widely used strain in fundamental research, serves as a clinical representative of the globally distributed M1T1 serotype associated with severe invasive infections. While 5448 has been effectively employed in murine models of GAS infection (16, 17), its genetic manipulation poses challenges, with even the construction of transposon mutant libraries proving highly difficult (10, 11, 18). To enhance GAS 5448 transformation efficiencies while retaining full virulence, we targeted one of the major barriers to transformation—the HsdR restriction subunit of the conserved three-component Type I restriction-modification (RM) system, HsdRSM. Hsd, denoting host specificity of DNA, signifies how these Type I RM systems cleave intracellular (foreign) DNA with improper methylation patterns. Mutations in this system improve transformation efficiency in other GAS strains (19–22), but with potential pleiotropic consequences. For example, while the deletion of the entire hsdRSM system in serotype M28 GAS strain MEW123 boosted transformation efficiency, it concurrently reduced virulence in a murine model of infection (20). A spectinomycin marker-replacement mutant eliminating just the restriction subunit hsdR also increased transformation efficiency but led to partially methylated genomic DNA likely due to polar effects (20). | {instruction}
https://journals.asm.org/doi/full/10.1128/mbio.00840-24?rfr_dat=cr_pub++0pubmed&url_ver=Z39.88-2003&rfr_id=ori%3Arid%3Acrossref.org |
You can only respond to these questions with information from the text below. Answer with 1 bullet point. | How many individuals were threatened by McDonald's in relation to the pamphlet produced? | Helen Steel and Dave Morris joined "London Greenpeace" in 1980. The organization was not connected to international Greenpeace; rather it was an independent activist group that campaigned for social change on a broad range of issues. One of the group's projects was the distribution of a pamphlet that was published in 1986, entitled "What's Wrong with McDonald's". McDonald's hired private detectives to infiltrate the organization, and ultimately threatened to sue the individuals who were distributing the pamphlets.2 In order to avoid being sued for libel, three of the five apologized, and in 1990 promised to stop distributing the pamphlets. But Ms. Steel and Mr. Morris, who have been dubbed the "McLibel 2," refused.3 No doubt this obstinacy was not expected, as McDonald's had apparently been successful in the past in stopping criticism and forcing apologies from much more affluent foes, including the BBC.4 McDonald's U.S. and its U.K. affiliate ("First Plaintiffs" and "Second Plaintiffs" respectively) filed suit against Morris and Steel. The more than two and a half-year trial, the longest in English history, began in June of 1994, after twenty-eight pre-trial hearings.5 In June of 1997, in a 750 page judgment, Justice Rodger Bell found that McDonald's had been defamed and assessed damages equivalent to $96,000 against the two defendants.6 It is not very likely that McDonald's will ever recover its $96,000, as Mr. Morris is an unemployed former postal worker and Ms. Steel is a part time bartender.7 But the president of McDonald's U.K. testified that this was not about money – it was about preventing lies being used to try to "'smash'" the company.8 The recovery would not come close to compensating McDonald's for its costs in the law suit, which have been estimated to be about $10 million, including over £6,500 per day of trial for their team of top English libel lawyers.9 Although a McDonald's official commented that they were "broadly satisfied,"10 some have suggested that it was at best a Pyrrhic victory.11 The case became a public relations disaster around the world, thanks in large part to the Internet, which now has a very active anti-McDonald's website. The site displays the offending pamphlet as well as even more derogatory comments about McDonald's, including some allegations from other sources that McDonald's had previously successfully suppressed by threats of law suits. When Justice Bell finally released his judgment, it included some rather detrimental conclusions about McDonald's business practices. Although Justice Bell found in favor of McDonald's on nearly all of their claims, he did reject a few. He concluded that McDonald's had contributed to cruelty to animals, used advertising to manipulate children, and paid employees so little as to depress wages in the catering industry in England. These findings were prominently reported in numerous articles describing the judgement.
The statements found to be defamatory included assertions in the pamphlet that McDonald’s was destroying rain forests; causing starvation in the Third World; producing litter in cities; causing heart disease, cancer and food poisoning; subjecting employees to “bad” working conditions; exploiting women and minority workers; and covering up the low quality of their food with advertising gimmicks aimed at children.19 Morris and Steel faced formidable obstacles under English law due to a combination of restrictive substantive libel laws, denial of a jury trial, the potential for very high damage awards and legal costs,43 and a lack of legal aid. The exclusion of defamation actions from the otherwise rather generous legal aid system in England44 is based on the fear of frivolous petty suits.45 Reformers have convincingly disputed this rationale;46 but in any event, the McDonald’s case aptly demonstrates the severe disadvantage the rule imposes on defendants of modest means being sued by affluent plaintiffs. England’s strict liability libel law contributes to its reputation as a haven for libel plaintiffs. The $96,000 award given by Justice Bell in McDonald’s was, by these standards, quite modest. But a company with annual earnings of $32 billion does not sue defendants like Morris and Steel for the money. They were suing to stop the criticism and deter future critics, and their track record in squelching criticism by threatened civil suits in England had been quite good until they faced the “McDonald’s 2. | system instruction: [You can only respond to these questions with information from the text below. Answer with 1 bullet point.]
Provide your answer in full sentences, referencing the document using quotations. | According to the 2010 COSTCO Code of Ethics as found in the Employee Agreement document, how can COSTCO employees earn the trust of COSTCO members? | **COSTCO Code of Ethics from Employee Agreement -- 2010**
Our Mission
To continually provide our members with quality goods and services at the lowest
possible prices.
In order to achieve our mission we will conduct our business with the following Code of Ethics in
mind:
Our Code of Ethics
1. Obey the law.
2. Take care of our members.
3. Take care of our employees.
4. Respect our suppliers.
If we do these four things throughout our organization, then we will achieve our ultimate goal,
which is to:
5. Reward our shareholders.
Costco’s Code of Ethics
1. Obey the law
The law is irrefutable! Absent a moral imperative to challenge a law, we must
conduct our business in total compliance with the laws of every community
where we do business. We pledge to:
Comply with all laws and other legal requirements.
Respect all public officials and their positions.
Comply with safety and security standards for all products sold.
Alert management if we observe illegal workplace misconduct by other employees.
Exceed ecological standards required in every community where we do business.
Comply with all applicable wage and hour laws.
Comply with all applicable antitrust laws.
Conduct business in and with foreign countries in a manner that is legal and proper under
United States and foreign laws.
Not offer or give any form of bribe or kickback or other thing of value to any person or pay
to obtain or expedite government action or otherwise act in violation of the Foreign
Corrupt Practices Act or the laws of other countries.
Not request or receive any bribe or kickback.
Promote fair, accurate, timely, and understandable disclosure in reports filed with the
Securities and Exchange Commission and in other public communications by the
Company.
2. Take care of our members
Costco membership is open to business owners, as well as individuals. Our members are our
reason for being – the key to our success. If we don’t keep our members happy, little else that
we do will make a difference. There are plenty of shopping alternatives for our members and if
they fail to show up, we cannot survive. Our members have extended a trust to Costco by virtue
of paying a fee to shop with us. We will succeed only if we do not violate the trust they have
extended to us, and that trust extends to every area of our business. To continue to earn their
trust, we pledge to:
Provide top-quality products at the best prices in the market.
Provide high quality, safe and wholesome food products by requiring that both suppliers
and employees be in compliance with the highest food safety standards in the industry.
Provide our members with a 100% satisfaction guarantee on every product and service
we sell, including their membership fee.
Assure our members that every product we sell is authentic in make and in
representation of performance.
Make our shopping environment a pleasant experience by making our members feel
welcome as our guests.
Provide products to our members that will be ecologically sensitive.
Provide our members with the best customer service in the retail industry.
Give back to our communities through employee volunteerism and employee and
corporate contributions to United Way and Children’s Hospitals.
3. Take care of our employees
Our employees are our most important asset. We believe we have the very best employees in
the warehouse club industry, and we are committed to providing them with rewarding challenges
and ample opportunities for personal and career growth. We pledge to provide our employees
with:
Competitive wages
Great benefits
A safe and healthy work environment
Challenging and fun work
Career opportunities
An atmosphere free from harassment or discrimination
An Open Door Policy that allows access to ascending levels of management to resolve
issues
Opportunities to give back to their communities through volunteerism and fund-raising
Career Opportunities at Costco:
Costco is committed to promoting from within the Company. The majority of our current
management team members (including Warehouse, Merchandise, Administrative,
Membership, Front End and Receiving Managers) are “home grown.”
Our growth plans remain very aggressive and our need for qualified, experienced
employees to fill supervisory and management positions remains great.
Today we have Location Managers and Vice Presidents who were once Stockers and
Callers or who started in clerical positions for Costco. We believe that Costco’s future
executive officers are currently working in our warehouses, depots and buying offices, as
well as in our Home Office.
4. Respect our suppliers
Our suppliers are our partners in business and for us to
prosper as a company, they must prosper with us. To that
end, we strive to:
Treat all suppliers and their representatives as we would expect to be treated if visiting
their places of business.
Honor all commitments.
Protect all suppliers’ property assigned to Costco as though it were our own.
Not accept gratuities of any kind from a supplier.
These guidelines are exactly that – guidelines – some common sense rules for the conduct of
our business. At the core of our philosophy as a company is the implicit understanding that all of
us, employees and management alike, must conduct ourselves in an honest and ethical manner
every day. Dishonest conduct will not be tolerated. To do any less would be unfair to the
overwhelming majority of our employees who support and respect Costco’s commitment to
ethical business conduct. Our employees must avoid actual or apparent conflicts of interest,
including creating a business in competition with the Company or working for or on behalf of
another employer in competition with the Company. If you are ever in doubt as to what course of
action to take on a business matter that is open to varying ethical interpretations, TAKE THE
HIGH ROAD AND DO WHAT IS RIGHT.
If we follow the four principles of our Code of Ethics throughout our organization, then we will
achieve our fifth principle and ultimate goal, which is to:
5. Reward our shareholders
As a company with stock that is traded publicly on the NASDAQ Stock Market, our
shareholders are our business partners.
We can only be successful so long as we are providing them with a good return on the
money they invest in our Company.
This, too, involves the element of trust. They trust us to use their investment wisely and to
operate our business in such a way that it is profitable.
Over the years Costco has been in business, we have consistently followed an upward
trend in the value of our stock. Yes, we have had our ups and our downs, but the overall
trend has been consistently up.
We believe Costco stock is a good investment, and we pledge to operate our Company in
such a way that our present and future stockholders, as well as our employees, will be
rewarded for our efforts.
Reporting of Violations and Enforcement
1. The Code of Ethics applies to all directors, officers, and employees of the Company.
Conduct that violates the Code of Ethics will constitute grounds for disciplinary action,
ranging from reprimand to termination and possible criminal prosecution.
2. All employees are expected to promptly report actual or suspected violations of law or the
Code of Ethics. Federal law, other laws and Costco policy protect employees from
retaliation if complaints are made in good faith. Violations involving employees should be
reported to the responsible Executive Vice President, who shall be responsible for taking
prompt and appropriate action to investigate and respond. Other violations (such as
those involving suppliers) and those involving accounting, internal control and auditing
should be reported to the general Counsel or the Chief Compliance Officer (999 Lake
Drive, Issaquah, WA 98027), who shall be responsible for taking prompt and appropriate
action to investigate and respond. Reports or complaints can also be made, confidentially
if you choose, through the Whistleblower Policy link on the Company’s eNet or Intranet
site.
What do Costco’s Mission Statement and Code of Ethics have to do with you?
EVERYTHING!
The continued success of our Company depends on how well each of Costco’s employees
adheres to the high standards mandated by our Code of Ethics. And a successful company
means increased opportunities for success and advancement for each of you.
No matter what your current job, you can put Costco’s Code of Ethics to work every day. It’s
reflected in the energy and enthusiasm you bring to work, in the relationships you build with your
management, your co-workers, our suppliers and our members.
By always choosing to do the right thing, you will build your own self-esteem, increase your
chances for success and make Costco more successful, too. It is the synergy of ideas and
talents, each of us working together and contributing our best, which makes Costco the great
company it is today and lays the groundwork for what we will be tomorrow. | <context>
Respond only using the information within the provided text block. You must provide a direct answer to the question asked and format your reply in a paragraph without any bullets, headers, or other extraneous formatting. Limit your reply to 50 words. | Please extract all acronyms and provide the full name for any and all acronyms found in the text. You can ignore any acronyms that are not explicitly defined. | Recent advances in generative AI systems, which are trained on large volumes of data to generate new
content that may mimic likenesses, voices, or other aspects of real people’s identities, have stimulated
congressional interest. Like the above-noted uses of AI to imitate Tom Hanks and George Carlin, the
examples below illustrate that some AI uses raise concerns under both ROP laws and myriad other laws.
One example of AI’s capability to imitate voices was an AI-generated song called “Heart on My Sleeve,”
which sounded like it was sung by the artist Drake and was heard by millions of listeners in 2023.
Simulating an artist’s voice in this manner could make one liable under ROP laws, although these laws
differ as to whether they cover voice imitations or vocal styles as opposed to the artist’s actual voice.
Voice imitations are not, however, prohibited by copyright laws. For example, the alleged copyright
violation that caused YouTube to remove “Heart on My Sleeve”—namely, that it sampled another
recording without permission—was unrelated to the Drake voice imitation. In August 2023, Google and
Universal Music were in discussions to license artists’ melodies and voices for AI-generated songs.
The potential for AI to replicate both voices and likenesses was also a point of contention in last year’s
negotiations for a collective bargaining agreement between the Screen Actors Guild-American Federation
of Television and Radio Artists (SAG-AFTRA)—a union that represents movie, television, and radio
actors—and television and movie studios, including streaming services. SAG-AFTRA expressed concern
that AI could be used to alter or replace actors’ performances without their permission, such as by using
real film recordings to train AI to create “digital replicas” of actors and voice actors. The Memorandum of
Agreement between SAG-AFTRA and studios approved in December 2023 requires studios to obtain
“clear and conspicuous” consent from an actor or background actor to create or use a digital replica of the
actor or to digitally alter the actor’s performance, with certain exceptions. It also requires that the actor’s
consent for use of a digital replica or digital alterations be based on a “reasonably specific description” of
the intended use or alteration. The agreement provides that consent continues after the actor’s death
unless “explicitly limited,” while consent for additional postmortem uses must be obtained from the
actor’s authorized representative or—if a representative cannot be identified or located—from the union.
In January 2024, SAG-AFTRA announced it had also reached an agreement with a voice technology
company regarding voice replicas for video games, while a negotiation to update SAG-AFTRA’s
agreement with video game publishers is reportedly ongoing.
Commentators have also raised concern with deceptive AI-generated or AI-altered content known as
“deepfakes,” including some videos with sexually explicit content and others meant to denigrate public
officials. To the extent this content includes real people’s NIL and is used commercially, ROP laws might
provide a remedy. Where deepfakes are used to promote products or services—such as the AI replica of
Tom Hanks used in a dental plan ad—they may also constitute false endorsement under the Lanham Act.
In addition to these laws, some states have enacted laws prohibiting sexually explicit deepfakes, with
California and New York giving victims a civil claim and Georgia and Virginia imposing criminal
liability. In addition, Section 1309 of the federal Violence Against Women Act Reauthorization Act of
2022 (VAWA 2022) provides a civil claim for nonconsensual disclosure of “intimate visual depictions,”
which might be interpreted to prohibit intimate deepfakes—as might some states’ “revenge porn” laws. A
bill introduced in the House of Representatives in May 2023, the Preventing Deepfakes of Intimate
Images Act, H.R. 3106, would amend VAWA 2022 by creating a separate civil claim for disclosing certain
“intimate digital depictions” without the written consent of the depicted individual, as well as providing
criminal liability for certain actual or threatened disclosures. Deepfakes may also give rise to liability
under state defamation laws where a party uses them to communicate reputation-damaging falsehoods
about a person with a requisite degree of fault.
Regarding the use of AI in political advertisements, some proposed legislation would prohibit deepfakes
or require disclaimers for them in federal campaigns, although such proposals may raise First Amendment
concerns. The Protect Elections from Deceptive AI Act, S. 2770 (118th Cong.), for instance, would ban
the use of AI to generate materially deceptive content falsely depicting federal candidates in political ads
to influence federal elections, while excluding news, commentary, satires, and parodies from liability.
Google announced that, as of mid-November 2023, verified election advertisers on its platform “must
prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events."
Another concern some commentators raise is that AI-generated material might be falsely attributed to real
persons without their permission. One writer who focuses on the publishing industry, for instance, found
that books apparently generated by AI were being sold under her name on Amazon. Although the
company ultimately removed these titles, the writer claimed that her “initial infringement claim with
Amazon went nowhere,” since her name was not trademarked and the books did not infringe existing
copyrights. As she noted, however, this scenario might give rise to claims under state ROP laws as well as
the Lanham Act. In addition, the Federal Trade Commission (FTC) states that “books sold as if authored
by humans but in fact reflecting the output of [AI]” violate the FTC Act and may result in civil fines.
It is unclear how Section 230 of the Communications Act of 1934 might apply when ROP-infringing
content from a third party, including content made with AI, is disseminated through social media and
other interactive computer services. Although the law generally bars any lawsuits that would hold online
service providers and users liable for third party content, there is an exception allowing lawsuits under
“any law pertaining to intellectual property.” Courts differ as to whether state ROP laws and the Lanham
Act’s prohibition on false endorsement are laws “pertaining to” IP within the meaning of Section 230.
Another Legal Sidebar discusses the application of Section 230 to generative AI more broadly.
Considerations for Congress
Some commentators have called for federal ROP legislation to provide more uniform and predictable
protection for the ROP in the United States. Others have argued that Congress should leave ROP
protection to the states on federalism grounds. If Congress decides to craft federal ROP legislation, it
might consider the scope of the ROP protections it seeks to enact, the effect of those enactments on state
ROP laws, and constitutional authorities and limitations on Congress’s power to enact ROP protections.
As noted below, some Members have proposed legislation that would prohibit certain unauthorized uses
of digital replicas or depictions of individuals while leaving state ROP laws in place. | Respond only using the information within the provided text block. You must provide a direct answer to the question asked and format your reply in a paragraph without any bullets, headers, or other extraneous formatting. Limit your reply to 50 words.
Please extract all acronyms and provide the full name for any and all acronyms found in the text. You can ignore any acronyms that are not explicitly defined.
Recent advances in generative AI systems, which are trained on large volumes of data to generate new
content that may mimic likenesses, voices, or other aspects of real people’s identities, have stimulated
congressional interest. Like the above-noted uses of AI to imitate Tom Hanks and George Carlin, the
examples below illustrate that some AI uses raise concerns under both ROP laws and myriad other laws.
One example of AI’s capability to imitate voices was an AI-generated song called “Heart on My Sleeve,”
which sounded like it was sung by the artist Drake and was heard by millions of listeners in 2023.
Simulating an artist’s voice in this manner could make one liable under ROP laws, although these laws
differ as to whether they cover voice imitations or vocal styles as opposed to the artist’s actual voice.
Voice imitations are not, however, prohibited by copyright laws. For example, the alleged copyright
violation that caused YouTube to remove “Heart on My Sleeve”—namely, that it sampled another
recording without permission—was unrelated to the Drake voice imitation. In August 2023, Google and
Universal Music were in discussions to license artists’ melodies and voices for AI-generated songs.
The potential for AI to replicate both voices and likenesses was also a point of contention in last year’s
negotiations for a collective bargaining agreement between the Screen Actors Guild-American Federation
of Television and Radio Artists (SAG-AFTRA)—a union that represents movie, television, and radio
actors—and television and movie studios, including streaming services. SAG-AFTRA expressed concern
that AI could be used to alter or replace actors’ performances without their permission, such as by using
real film recordings to train AI to create “digital replicas” of actors and voice actors. The Memorandum of
Agreement between SAG-AFTRA and studios approved in December 2023 requires studios to obtain
“clear and conspicuous” consent from an actor or background actor to create or use a digital replica of the
actor or to digitally alter the actor’s performance, with certain exceptions. It also requires that the actor’s
consent for use of a digital replica or digital alterations be based on a “reasonably specific description” of
the intended use or alteration. The agreement provides that consent continues after the actor’s death
unless “explicitly limited,” while consent for additional postmortem uses must be obtained from the
actor’s authorized representative or—if a representative cannot be identified or located—from the union.
In January 2024, SAG-AFTRA announced it had also reached an agreement with a voice technology
company regarding voice replicas for video games, while a negotiation to update SAG-AFTRA’s
agreement with video game publishers is reportedly ongoing.
Commentators have also raised concern with deceptive AI-generated or AI-altered content known as
“deepfakes,” including some videos with sexually explicit content and others meant to denigrate public
officials. To the extent this content includes real people’s NIL and is used commercially, ROP laws might
provide a remedy. Where deepfakes are used to promote products or services—such as the AI replica of
Tom Hanks used in a dental plan ad—they may also constitute false endorsement under the Lanham Act.
In addition to these laws, some states have enacted laws prohibiting sexually explicit deepfakes, with
California and New York giving victims a civil claim and Georgia and Virginia imposing criminal
liability. In addition, Section 1309 of the federal Violence Against Women Act Reauthorization Act of
2022 (VAWA 2022) provides a civil claim for nonconsensual disclosure of “intimate visual depictions,”
which might be interpreted to prohibit intimate deepfakes—as might some states’ “revenge porn” laws. A
bill introduced in the House of Representatives in May 2023, the Preventing Deepfakes of Intimate
Images Act, H.R. 3106, would amend VAWA 2022 by creating a separate civil claim for disclosing certain
“intimate digital depictions” without the written consent of the depicted individual, as well as providing
criminal liability for certain actual or threatened disclosures. Deepfakes may also give rise to liability
under state defamation laws where a party uses them to communicate reputation-damaging falsehoods
about a person with a requisite degree of fault.
Regarding the use of AI in political advertisements, some proposed legislation would prohibit deepfakes
or require disclaimers for them in federal campaigns, although such proposals may raise First Amendment
concerns. The Protect Elections from Deceptive AI Act, S. 2770 (118th Cong.), for instance, would ban
the use of AI to generate materially deceptive content falsely depicting federal candidates in political ads
to influence federal elections, while excluding news, commentary, satires, and parodies from liability.
Google announced that, as of mid-November 2023, verified election advertisers on its platform “must
prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events.”
Another concern some commentators raise is that AI-generated material might be falsely attributed to real
persons without their permission. One writer who focuses on the publishing industry, for instance, found
that books apparently generated by AI were being sold under her name on Amazon. Although the
company ultimately removed these titles, the writer claimed that her “initial infringement claim with
Amazon went nowhere,” since her name was not trademarked and the books did not infringe existing
copyrights. As she noted, however, this scenario might give rise to claims under state ROP laws as well as
the Lanham Act. In addition, the Federal Trade Commission (FTC) states that “books sold as if authored
by humans but in fact reflecting the output of [AI]” violate the FTC Act and may result in civil fines.
It is unclear how Section 230 of the Communications Act of 1934 might apply when ROP-infringing
content from a third party, including content made with AI, is disseminated through social media and
other interactive computer services. Although the law generally bars any lawsuits that would hold online
service providers and users liable for third party content, there is an exception allowing lawsuits under
“any law pertaining to intellectual property.” Courts differ as to whether state ROP laws and the Lanham
Act’s prohibition on false endorsement are laws “pertaining to” IP within the meaning of Section 230.
Another Legal Sidebar discusses the application of Section 230 to generative AI more broadly.
Considerations for Congress
Some commentators have called for federal ROP legislation to provide more uniform and predictable
protection for the ROP in the United States. Others have argued that Congress should leave ROP
protection to the states on federalism grounds. If Congress decides to craft federal ROP legislation, it
might consider the scope of the ROP protections it seeks to enact, the effect of those enactments on state
ROP laws, and constitutional authorities and limitations on Congress’s power to enact ROP protections.
As noted below, some Members have proposed legislation that would prohibit certain unauthorized uses
of digital replicas or depictions of individuals while leaving state ROP laws in place. |
Respond using only the information contained within this prompt. | According to this letter to shareholders, what was launched in Germany in 2023 to benefit the Motors P&A business? | Dear Stockholders,
This past year has been a transformative one for eBay: We delivered solid results, and in our continued
pursuit to drive long-term sustainable growth, we’ve set an even more ambitious vision — to reinvent
the future of ecommerce for enthusiasts.
We made significant progress against our goals, with improvements in organic FX-Neutral and
as-reported year-over-year GMV growth during each quarter of 2023. For the full year, revenue was
up 3% organically, we generated approximately $2 billion of free cash flow, and we returned over
$1.9 billion to stockholders through repurchases and dividends. Based on these results, we are
confident that our strategy is the right one and that we are on the path to build a stronger, more
resilient company.
Since I rejoined eBay as CEO four years ago, we’ve renewed our focus on offering meaningful choices
and value, and building trust with our global community of sellers and buyers. The pace of innovation
at eBay has accelerated, and we have pivoted to a full-funnel marketing approach aimed at attracting
and retaining enthusiast customers.
In 2023, we raised the bar further to enhance the end-to-end experience for our customers and drive
growth for stockholders by leveraging three key strategic pillars: relevant experiences, scalable
solutions, and magical innovations.
As we navigated a dynamic macroeconomic environment, we set our organization up for speed
and prioritized initiatives that we believe will have an outsized impact on our customers, community,
and stockholders.
Relevant Experiences
We are focused on solving the specific and ever-changing needs of our sellers and buyers across all
shopping occasions. Through our Focus Category playbook, we have seen a meaningful improvement
in our growth relative to the market in every category we’ve invested in to date.
In 2023, Focus Categories grew by 4% year-over-year on an FX-neutral basis, outpacing the
remainder of our business by roughly seven points. We exited the year with Focus Categories making
up nearly 30% of our business, and we will continue to expand to new categories in 2024. Our
investments in Focus Categories led to numerous improvements in the overall customer experience
on eBay last year, including:
• We launched eBay Guaranteed Fit in the U.S. and similar programs in the UK and Germany to
benefit our Motors Parts & Accessories (P&A) business, assuring buyers that eBay will stand
behind them if a part doesn’t fit their vehicle. These programs are underpinned by multiple years of
investment in P&A technology, have delivered a game-changing level of trust for buyers, and have
yielded measurable uplift in conversion for sellers.
• We launched the Certified by Brand program with over 30 brands offering new and certified pre-owned inventory, bringing an enhanced level of trust to the watch, jewelry, and handbag categories.
• Our eBay Refurbished program continues to outperform as consumers turn to eBay for
sustainability and value in the current economic environment. eBay Refurbished was one of our
fastest growing Focus Categories in 2023, posting healthy double-digit, FX-Neutral GMV growth
for the full year. Last year, we added dozens of new categories to the program, signed up more
brands and OEMs to sell refurbished inventory directly on eBay, and made onboarding for small
business sellers faster and more scalable to increase the amount of great refurbished inventory
available to buyers.
In addition to Focus Categories, we’re investing in country-specific experiences so that our
marketplace is more attuned to the needs of local sellers and buyers. Last year, we made a significant
investment in Germany, our third largest market as measured by demand, adopting a similar approach
to our vertical playbook:
• We removed some of the biggest hurdles for sellers and introduced a number of features to
address the unique needs of German consumers, including search and SEO enhancements,
shipping and return label improvements, and a complete overhaul of the local pickup experience.
Additionally, we eliminated final value fees for German C2C sellers on domestic transactions to
stimulate our sell-to-buy flywheel in the country.
• Over the past year, C2C seller NPS and customer satisfaction have increased by 20 points or more,
buyers who sell returned to positive growth, unpaid items have been cut in half for local pickups,
and C2C volume in Germany has returned to positive growth.
• Notably, these investments have made our business significantly more resilient to the challenging
macroenvironment in Germany and have resulted in hundreds of millions of dollars of incremental
GMV relative to our prior trajectory.
Finally, we continued to improve the selling and buying experiences with horizontal enhancements
in 2023:
• We invested further in new capabilities for Search, such as deep learning and visual similarity
to improve ranking and retrieval, reducing queries with low or null results to surface more of our
amazing inventory for customers.
• We began our work in modernizing the buying experience on eBay by rolling out an enhanced
View Item page, which features a streamlined appearance, larger and higher-resolution images,
and an optimized information hierarchy. This update has contributed to a measurable uplift in
GMV versus our previous design, and our work to modernize the overall shopping experience will
continue in 2024.
Scalable Solutions
eBay’s scale is one of our most powerful assets, with over 28 years of data, 132 million buyers, and
nearly 2 billion live listings in 190 markets around the world at the end of 2023. With the foundational
capabilities we developed using AI last year, we will continue to invest in unlocking the power of
our data assets to fuel next-gen ecommerce experiences that we are confident will exceed our
customers’ expectations and set a new standard for the industry.
Our advertising business continued to show strong double-digit growth rates in 2023, driving our
ability to invest meaningfully into the rest of the business. Our advertising platform, which surpassed
900 million live Promoted Listings in 2023, helps sellers achieve greater velocity and price realization
for their inventory.
• We generated over $1.4 billion of total advertising revenue, up roughly 25% on an FX-Neutral basis
for the year and more than double our advertising revenue in 2019.
• Promoted Listings Standard, our cost-per-acquisition product, remains our largest contributor to
advertising revenue due to its simplicity and proven return on ad spend.
• And Promoted Listings Advanced, our cost-per-click format, was among the fastest-growing
products in our advertising portfolio on a year-over-year basis. Over the course of the year, we
significantly enhanced this product with customized keywords and bidding structures, and by
using AI to optimize campaigns.
Also, the payments platform processed approximately $70 billion of volume in 2023 while enabling
transactions between millions of eBay sellers and buyers globally. We continued to scale our
financial services offerings like FX conversion and alternative methods to pay and get paid. We also
meaningfully improved our identity and risk management capabilities that enhance customer value
and marketplace trust. For instance, investments in the eBay checkout experience and in-house
risk modeling enhancements aimed at reducing transaction friction have measurably improved
conversion on our marketplace and contributed nearly $1 billion in incremental GMV in 2023.
Lastly, our global scale and cross-border capabilities enabled us to launch the eBay International
Shipping (eIS) program, which makes trade more seamless and cost effective for sellers and buyers
worldwide by simplifying the complexities of taxes, duties, and returns.
• We introduced combined shipping for eIS, allowing buyers to order multiple items from an
international seller and pay one consolidated shipping fee.
• We ended 2023 with over 400 million live listings from U.S. sellers shippable to international
buyers, and sellers have had an overwhelmingly positive response to the program, with customer
satisfaction ratings over 40 points higher than the previous global shipping program eIS replaced. | System instruction: Respond using only the information contained within this prompt.
question: According to this letter to shareholders, what was launched in Germany in 2023 to benefit the Motors P&A business?
context: Dear Stockholders,
This past year has been a transformative one for eBay: We delivered solid results, and in our continued
pursuit to drive long-term sustainable growth, we’ve set an even more ambitious vision — to reinvent
the future of ecommerce for enthusiasts.
We made significant progress against our goals, with improvements in organic FX-Neutral and
as-reported year-over-year GMV growth during each quarter of 2023. For the full year, revenue was
up 3% organically, we generated approximately $2 billion of free cash flow, and we returned over
$1.9 billion to stockholders through repurchases and dividends. Based on these results, we are
confident that our strategy is the right one and that we are on the path to build a stronger, more
resilient company.
Since I rejoined eBay as CEO four years ago, we’ve renewed our focus on offering meaningful choices
and value, and building trust with our global community of sellers and buyers. The pace of innovation
at eBay has accelerated, and we have pivoted to a full-funnel marketing approach aimed at attracting
and retaining enthusiast customers.
In 2023, we raised the bar further to enhance the end-to-end experience for our customers and drive
growth for stockholders by leveraging three key strategic pillars: relevant experiences, scalable
solutions, and magical innovations.
As we navigated a dynamic macroeconomic environment, we set our organization up for speed
and prioritized initiatives that we believe will have an outsized impact on our customers, community,
and stockholders.
Relevant Experiences
We are focused on solving the specific and ever-changing needs of our sellers and buyers across all
shopping occasions. Through our Focus Category playbook, we have seen a meaningful improvement
in our growth relative to the market in every category we’ve invested in to date.
In 2023, Focus Categories grew by 4% year-over-year on an FX-neutral basis, outpacing the
remainder of our business by roughly seven points. We exited the year with Focus Categories making
up nearly 30% of our business, and we will continue to expand to new categories in 2024. Our
investments in Focus Categories led to numerous improvements in the overall customer experience
on eBay last year, including:
• We launched eBay Guaranteed Fit in the U.S. and similar programs in the UK and Germany to
benefit our Motors Parts & Accessories (P&A) business, assuring buyers that eBay will stand
behind them if a part doesn’t fit their vehicle. These programs are underpinned by multiple years of
investment in P&A technology, have delivered a game-changing level of trust for buyers, and have
yielded measurable uplift in conversion for sellers.
• We launched the Certified by Brand program with over 30 brands offering new and certified pre-owned inventory, bringing an enhanced level of trust to the watch, jewelry, and handbag categories.
• Our eBay Refurbished program continues to outperform as consumers turn to eBay for
sustainability and value in the current economic environment. eBay Refurbished was one of our
fastest growing Focus Categories in 2023, posting healthy double-digit, FX-Neutral GMV growth
for the full year. Last year, we added dozens of new categories to the program, signed up more
brands and OEMs to sell refurbished inventory directly on eBay, and made onboarding for small
business sellers faster and more scalable to increase the amount of great refurbished inventory
available to buyers.
In addition to Focus Categories, we’re investing in country-specific experiences so that our
marketplace is more attuned to the needs of local sellers and buyers. Last year, we made a significant
investment in Germany, our third largest market as measured by demand, adopting a similar approach
to our vertical playbook:
• We removed some of the biggest hurdles for sellers and introduced a number of features to
address the unique needs of German consumers, including search and SEO enhancements,
shipping and return label improvements, and a complete overhaul of the local pickup experience.
Additionally, we eliminated final value fees for German C2C sellers on domestic transactions to
stimulate our sell-to-buy flywheel in the country.
• Over the past year, C2C seller NPS and customer satisfaction have increased by 20 points or more,
buyers who sell returned to positive growth, unpaid items have been cut in half for local pickups,
and C2C volume in Germany has returned to positive growth.
• Notably, these investments have made our business significantly more resilient to the challenging
macroenvironment in Germany and have resulted in hundreds of millions of dollars of incremental
GMV relative to our prior trajectory.
Finally, we continued to improve the selling and buying experiences with horizontal enhancements
in 2023:
• We invested further in new capabilities for Search, such as deep learning and visual similarity
to improve ranking and retrieval, reducing queries with low or null results to surface more of our
amazing inventory for customers.
• We began our work in modernizing the buying experience on eBay by rolling out an enhanced
View Item page, which features a streamlined appearance, larger and higher-resolution images,
and an optimized information hierarchy. This update has contributed to a measurable uplift in
GMV versus our previous design, and our work to modernize the overall shopping experience will
continue in 2024.
Scalable Solutions
eBay’s scale is one of our most powerful assets, with over 28 years of data, 132 million buyers, and
nearly 2 billion live listings in 190 markets around the world at the end of 2023. With the foundational
capabilities we developed using AI last year, we will continue to invest in unlocking the power of
our data assets to fuel next-gen ecommerce experiences that we are confident will exceed our
customers’ expectations and set a new standard for the industry.
Our advertising business continued to show strong double-digit growth rates in 2023, driving our
ability to invest meaningfully into the rest of the business. Our advertising platform, which surpassed
900 million live Promoted Listings in 2023, helps sellers achieve greater velocity and price realization
for their inventory.
• We generated over $1.4 billion of total advertising revenue, up roughly 25% on an FX-Neutral basis
for the year and more than double our advertising revenue in 2019.
• Promoted Listings Standard, our cost-per-acquisition product, remains our largest contributor to
advertising revenue due to its simplicity and proven return on ad spend.
• And Promoted Listings Advanced, our cost-per-click format, was among the fastest-growing
products in our advertising portfolio on a year-over-year basis. Over the course of the year, we
significantly enhanced this product with customized keywords and bidding structures, and by
using AI to optimize campaigns.
Also, the payments platform processed approximately $70 billion of volume in 2023 while enabling
transactions between millions of eBay sellers and buyers globally. We continued to scale our
financial services offerings like FX conversion and alternative methods to pay and get paid. We also
meaningfully improved our identity and risk management capabilities that enhance customer value
and marketplace trust. For instance, investments in the eBay checkout experience and in-house
risk modeling enhancements aimed at reducing transaction friction have measurably improved
conversion on our marketplace and contributed nearly $1 billion in incremental GMV in 2023.
Lastly, our global scale and cross-border capabilities enabled us to launch the eBay International
Shipping (eIS) program, which makes trade more seamless and cost effective for sellers and buyers
worldwide by simplifying the complexities of taxes, duties, and returns.
• We introduced combined shipping for eIS, allowing buyers to order multiple items from an
international seller and pay one consolidated shipping fee.
• We ended 2023 with over 400 million live listings from U.S. sellers shippable to international
buyers, and sellers have had an overwhelmingly positive response to the program, with customer
satisfaction ratings over 40 points higher than the previous global shipping program eIS replaced. |
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | As you know AI is advancing in every field, I'm bit interested in healthcare as it is my domain. How is generative AI being used in healthcare system? | Revolutionizing Healthcare: The Transformative Power of AI
May 17, 2024
By Kevin Riddleberger, Co-founder, DispatchHealth
Kevin Riddleberger, PA-C, MBA, co-founder and chief strategy officer at DispatchHealth.
As the co-founder of a pioneering digital health company and a seasoned physician associate, I have navigated the forefront of healthcare innovation over the last decade. Our journey has witnessed both incremental advancements and radical shifts, but none as transformative as the current wave led by Artificial Intelligence (AI). This technological revolution promises to redefine healthcare delivery and management, bringing profound changes that were once the realm of science fiction into today’s clinical practices. The impact on the healthcare society will be bigger than electricity, the computer or the internet by many multiples. Research published in the New England Journal of Medicine last year indicates that generative AI has improved patient outcomes by up to 45% in clinical trials, particularly in the treatment of chronic diseases such as diabetes and heart disease, through personalized medicine and management plans. While a report by McKinsey & Company predicts that generative AI could help reduce healthcare costs in the United States by up to $150 billion annually by 2026 through automation of administrative tasks and optimization of clinical workflows.
Just last year, I highlighted in a thought leadership piece a typical day in the life of a clinician leveraging generative AI models embedded in their daily workflow. Since then we have witnessed an explosion of venture capital in companies to the tune of billions of dollars due to immense impact on healthcare operations and drug discoveries. Generative AI models are crucial for achieving the Quintuple Aim of healthcare, enhancing care quality, provider satisfaction, and patient engagement while reducing costs and improving health populations.
The volume of medical literature published annually is overwhelming, with estimates suggesting it would take decades for a clinician to process a year’s worth of research. We have long surpassed the power of the human brain and need augmentation quickly.
The Expanding Role of AI in Healthcare
AI’s integration into healthcare is set to usher in transformative changes, including the development of personalized treatment plans tailored to individual genetic profiles and lifestyles, and virtual health assistants available 24/7, providing real-time, accurate medical advice. The expectation is that AI will manage over 85% of customer interactions in healthcare by 2025, reducing the need for human intervention and allowing healthcare professionals to focus more on patient care. This shift towards technology-dependent care teams emphasizes AI’s role as a partner in healthcare, enhancing our capabilities to serve and care. While technology won’t replace humans, it will become a more integral member of the care team. The future of care delivery will lie in a technology-dependent care team approach, where healthcare workers focus on their greatest comparative advantages over technology. In the quest for top-of-license care, clinician roles, decision making processes, and workflows will evolve by embedding this transformative technology.
Companies Leading the AI Healthcare Revolution
Eko Health: Known for its AI-powered cardiac monitoring tools, Eko has developed algorithms that significantly improve the detection of heart conditions in routine screenings, potentially reducing the rates of undiagnosed cardiac issues by up to 30%. Eko recently was awarded by the FDA the first AI to aid heart failure detection during routine check-ups using their stethoscopes.
Butterfly Network: Their portable ultrasound device, powered by AI, has democratized medical imaging, making it more accessible and affordable. Introducing AI-powered POCUS is proving to increase diagnostic speed and accuracy in point of care settings thus minimizing more expensive imaging studies required.
Abridge and Nuance: These companies are at the forefront of conversational AI, significantly reducing the clerical burden on clinicians, with both platforms now seamlessly integrated into Epic systems. The technologies use AI to transcribe and structure medical conversations during patient visits, which helps in ensuring that crucial information is captured accurately and can be easily referenced later, reducing the 70+ hours of documentation per clinician every month.
Hippocratic AI: The product is a novel staffing marketplace where companies can “hire” auto-pilot generative AI-powered agents to conduct low-risk, non-diagnostic, patient-facing services to help solve the massive healthcare staffing crisis. The company’s vision is centered around how generative AI can provide super staffing and healthcare abundance and equity in our industry.
Glass Health: An emerging player, Glass Health uses AI to integrate various data sources to provide a holistic view of patient health, aiding in more comprehensive care planning and clinical decision support at the point of care.
Looking Forward: Embracing the Future of Medicine
AI is a strategy enabler, not a strategy in itself. Effective AI adopters in healthcare will prioritize integrated governance over isolated initiatives, using AI as a tool to support strategic endeavors and to incorporate data as a key competitive asset. While AI presents unprecedented opportunities for advancement, it also brings challenges such as data privacy concerns, the need for robust ethical frameworks to prevent bias, and the importance of maintaining the human touch in medicine. Addressing these issues is crucial as we integrate more AI tools into clinical settings. The American Academy of PAs (AAPA) has recently developed an AI task force to guide future legislation and ensure that the PA profession is safeguarded in future regulatory frameworks, and I am proud to be part of this esteemed group moving forward.
As we prepare for the upcoming annual AAPA conference in Houston, I look forward to engaging with healthcare professionals and leaders to discuss the future of medicine and AI’s role. The opportunity to hear from pioneers like Daniel Kraft, MD and leading a panel discussion on healthcare innovation will further our understanding and implementation of AI technologies. These events at the AAPA conference underscore the vibrant, dynamic nature of our profession and the central role that innovation plays in driving us forward. I am eager to share ideas with fellow thought leaders and continue pushing the boundaries of what is possible in healthcare.
As we stand on the brink of a technological revolution in healthcare, driven by artificial intelligence, our responsibilities are manifold. We must not only embrace AI and its capabilities but also guide its integration thoughtfully and ethically to enhance patient care and improve health outcomes. The promise of AI in healthcare is vast and exciting, and I am optimistic about the transformative changes we are about to witness. Let us step boldly into this future, equipped with knowledge, inspired by innovation, and committed to the betterment of patient care worldwide. Let’s not be afraid but rather be bold and embrace the evolution of technology to advance our industry and our profession. | "================
<TEXT PASSAGE>
=======
Revolutionizing Healthcare: The Transformative Power of AI
May 17, 2024
By Kevin Riddleberger, Co-founder, DispatchHealth
Kevin Riddleberger, PA-C, MBA, co-founder and chief strategy officer at DispatchHealth.
As the co-founder of a pioneering digital health company and a seasoned physician associate, I have navigated the forefront of healthcare innovation over the last decade. Our journey has witnessed both incremental advancements and radical shifts, but none as transformative as the current wave led by Artificial Intelligence (AI). This technological revolution promises to redefine healthcare delivery and management, bringing profound changes that were once the realm of science fiction into today’s clinical practices. The impact on the healthcare society will be bigger than electricity, the computer or the internet by many multiples. Research published in the New England Journal of Medicine last year indicates that generative AI has improved patient outcomes by up to 45% in clinical trials, particularly in the treatment of chronic diseases such as diabetes and heart disease, through personalized medicine and management plans. While a report by McKinsey & Company predicts that generative AI could help reduce healthcare costs in the United States by up to $150 billion annually by 2026 through automation of administrative tasks and optimization of clinical workflows.
Just last year, I highlighted in a thought leadership piece a typical day in the life of a clinician leveraging generative AI models embedded in their daily workflow. Since then we have witnessed an explosion of venture capital in companies to the tune of billions of dollars due to immense impact on healthcare operations and drug discoveries. Generative AI models are crucial for achieving the Quintuple Aim of healthcare, enhancing care quality, provider satisfaction, and patient engagement while reducing costs and improving health populations.
The volume of medical literature published annually is overwhelming, with estimates suggesting it would take decades for a clinician to process a year’s worth of research. We have long surpassed the power of the human brain and need augmentation quickly.
The Expanding Role of AI in Healthcare
AI’s integration into healthcare is set to usher in transformative changes, including the development of personalized treatment plans tailored to individual genetic profiles and lifestyles, and virtual health assistants available 24/7, providing real-time, accurate medical advice. The expectation is that AI will manage over 85% of customer interactions in healthcare by 2025, reducing the need for human intervention and allowing healthcare professionals to focus more on patient care. This shift towards technology-dependent care teams emphasizes AI’s role as a partner in healthcare, enhancing our capabilities to serve and care. While technology won’t replace humans, it will become a more integral member of the care team. The future of care delivery will lie in a technology-dependent care team approach, where healthcare workers focus on their greatest comparative advantages over technology. In the quest for top-of-license care, clinician roles, decision making processes, and workflows will evolve by embedding this transformative technology.
Companies Leading the AI Healthcare Revolution
Eko Health: Known for its AI-powered cardiac monitoring tools, Eko has developed algorithms that significantly improve the detection of heart conditions in routine screenings, potentially reducing the rates of undiagnosed cardiac issues by up to 30%. Eko recently was awarded by the FDA the first AI to aid heart failure detection during routine check-ups using their stethoscopes.
Butterfly Network: Their portable ultrasound device, powered by AI, has democratized medical imaging, making it more accessible and affordable. Introducing AI-powered POCUS is proving to increase diagnostic speed and accuracy in point of care settings thus minimizing more expensive imaging studies required.
Abridge and Nuance: These companies are at the forefront of conversational AI, significantly reducing the clerical burden on clinicians, with both platforms now seamlessly integrated into Epic systems. The technologies use AI to transcribe and structure medical conversations during patient visits, which helps in ensuring that crucial information is captured accurately and can be easily referenced later, reducing the 70+ hours of documentation per clinician every month.
Hippocratic AI: The product is a novel staffing marketplace where companies can “hire” auto-pilot generative AI-powered agents to conduct low-risk, non-diagnostic, patient-facing services to help solve the massive healthcare staffing crisis. The company’s vision is centered around how generative AI can provide super staffing and healthcare abundance and equity in our industry.
Glass Health: An emerging player, Glass Health uses AI to integrate various data sources to provide a holistic view of patient health, aiding in more comprehensive care planning and clinical decision support at the point of care.
Looking Forward: Embracing the Future of Medicine
AI is a strategy enabler, not a strategy in itself. Effective AI adopters in healthcare will prioritize integrated governance over isolated initiatives, using AI as a tool to support strategic endeavors and to incorporate data as a key competitive asset. While AI presents unprecedented opportunities for advancement, it also brings challenges such as data privacy concerns, the need for robust ethical frameworks to prevent bias, and the importance of maintaining the human touch in medicine. Addressing these issues is crucial as we integrate more AI tools into clinical settings. The American Academy of PAs (AAPA) has recently developed an AI task force to guide future legislation and ensure that the PA profession is safeguarded in future regulatory frameworks, and I am proud to be part of this esteemed group moving forward.
As we prepare for the upcoming annual AAPA conference in Houston, I look forward to engaging with healthcare professionals and leaders to discuss the future of medicine and AI’s role. The opportunity to hear from pioneers like Daniel Kraft, MD and leading a panel discussion on healthcare innovation will further our understanding and implementation of AI technologies. These events at the AAPA conference underscore the vibrant, dynamic nature of our profession and the central role that innovation plays in driving us forward. I am eager to share ideas with fellow thought leaders and continue pushing the boundaries of what is possible in healthcare.
As we stand on the brink of a technological revolution in healthcare, driven by artificial intelligence, our responsibilities are manifold. We must not only embrace AI and its capabilities but also guide its integration thoughtfully and ethically to enhance patient care and improve health outcomes. The promise of AI in healthcare is vast and exciting, and I am optimistic about the transformative changes we are about to witness. Let us step boldly into this future, equipped with knowledge, inspired by innovation, and committed to the betterment of patient care worldwide. Let’s not be afraid but rather be bold and embrace the evolution of technology to advance our industry and our profession.
https://www.aapa.org/news-central/2024/05/revolutionizing-healthcare-the-transformative-power-of-ai/
================
<QUESTION>
=======
As you know AI is advancing in every field, I'm bit interested in healthcare as it is my domain. How is generative AI being used in healthcare system?
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." |
To answer the following question, use only information contained in the context block/prompt. Do not use any previous knowledge or outside sources. | Whether currently available or unavailable, what is an example of a smokeless cannabis delivery method that clinical trials hope to help develop? | Three focal concerns in evaluating the medical use of marijuana are:
1. Evaluation of the effects of isolated cannabinoids;
2. Evaluation of the risks associated with the medical use of marijuana; and
3. Evaluation of the use of smoked marijuana.
EFFECTS OF ISOLATED CANNABINOIDS
Cannabinoid Biology
Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids.
Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions:
o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory.
o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear.
o The brain develops tolerance to cannabinoids.
o Animal research demonstrates the potential for dependence, but this
potential is observed under a narrower range of conditions than with
benzodiazepines, opiates, cocaine, or nicotine.
o Withdrawal symptoms can be observed in animals but appear to be mild
compared to opiates or benzodiazepines, such as diazepam (Valium).
Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems.
Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. Because different cannabinoids appear to
have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone.
Efficacy of Cannabinoid Drugs
The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.)
The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting.
Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified.
Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs.
Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances.
Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems.
Influence of Psychological Effects on Therapeutic Effects
The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those
patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite.
Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect.
Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials.
RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA
Physiological Risks
Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants.
For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use.
The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung
damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies.
Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease.
Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent.
Marijuana Dependence and Withdrawal
A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse.
Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping.
Marijuana as a "Gateway" Drug
Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age.
In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use.
Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would
not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential.
Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids.
USE OF SMOKED MARIJUANA
Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups.
Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy.
The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use.
Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions:
o failure of all approved medications to provide relief has been documented,
o the symptoms can reasonably be expected to be relieved by rapid- onset cannabinoid drugs,
o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and
o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a
submission by a physician to provide marijuana to a patient for a specified use.
Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.
data, it is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones.
Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use.
It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.
HOW THIS STUDY WAS CONDUCTED
Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions.
Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluating the methods used in various studies and the validity of the authors' conclusions. Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results
of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves.
The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers).
Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from.
The study team visited four cannabis buyers' clubs in California (the Oakland
Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los
Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical
Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in
Los Angeles and Louisiana State University Medical Center in New Orleans). We
listened to many individual stories from the buyers' clubs about using marijuana to treat a
variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS
patients. Marinol is the brand name for dronabinol, which is Δ9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.
MARIJUANA TODAY
The Changing Legal Landscape
In the 20th century, marijuana has been used more for its euphoric effects than as a
medicine. Its psychological and behavioral effects have concerned public officials since
the drug first appeared in the southwestern and southern states during the first two
decades of the century. By 1931, at least 29 states had prohibited use of the drug for
nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug. Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S. Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior.
In the late 1960s and early 1970s, there was a sharp increase in marijuana use among
adolescents and young adults. The current legal status of marijuana was established in
1970 with the passage of the Controlled Substances Act, which divided drugs into five
schedules and placed marijuana in Schedule I, the category for drugs with high potential
for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In
1972, the National Organization for the Reform of Marijuana Laws (NORML), an
organization that supports decriminalization of marijuana, unsuccessfully petitioned the
Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to
Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments, less toxic, and in many cases more effective than conventional medicines.13 Thus, for 25 years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients.
Since NORML's petition in 1972, there have been a variety of legal decisions
concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized
use of marijuana, although some of them recriminalized marijuana use in the 1980s and
1990s. During the 1970s, reports of the medical value of marijuana began to appear,
particularly claims that marijuana relieved the nausea associated with chemotherapy.
Health departments in six states conducted small studies to investigate the reports. When
the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved
their symptoms, most dramatically those associated with AIDS wasting. Over this period
a number of defendants charged with unlawful possession of marijuana claimed that they
were using the drug to treat medical conditions and that violation of the law was therefore
justified (the so-called medical necessity defense). Although most courts rejected these claims, some accepted them.8
Against that backdrop, voters in California and Arizona in 1996 passed two referenda
that attempted to legalize the medical use of marijuana under particular conditions. Public
support for patient access to marijuana for medical use appears substantial; public
opinion polls taken during 1997 and 1998 generally reported 60—70 percent of respondents in favor of allowing medical uses of marijuana.15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions.
Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate.
Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D).
Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use.
Marijuana's use as an herbal remedy before the 20th century is well documented.1,10,11
However, modern medicine adheres to different standards from those used in the past.
The question is not whether marijuana can be used as an herbal remedy but rather how
well this remedy meets today's standards of efficacy and safety. We understand much
more than previous generations about medical risks. Our society generally expects its
licensed medications to be safe, reliable, and of proven efficacy; contaminants and
inconsistent ingredients in our health treatments are not tolerated. That refers not only to
prescription and over-the-counter drugs but also to vitamin supplements and herbal
remedies purchased at the grocery store. For example, the essential amino acid l-
tryptophan was widely sold in health food stores as a natural remedy for insomnia until
early 1990 when it became linked to an epidemic of a new and potentially fatal illness (eosinophilia-myalgia syndrome).9,12 When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer.
Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their
roots either directly or indirectly in plant remedies.7 At the same time, most current prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid.
Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of
modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development. Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds.
Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly seek alternative, low-technology therapies.4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of visits to primary care physicians.5,6 Recent interest in the medical use of marijuana
coincides with this trend toward self-help and a search for "natural" therapies. Indeed,
several people who spoke at the IOM public hearings in support of the medical use of
marijuana said that they generally preferred herbal medicines to standard
pharmaceuticals. However, few alternative therapies have been carefully and
systematically tested for safety and efficacy, as is required for medications approved by the FDA (Food and Drug Administration).2
WHO USES MEDICAL MARIJUANA?
There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Dennis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed.
John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1).
The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile. For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old.
Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second- largest group is patients with chronic pain.
Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting.
Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it.
Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients).
Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission.
The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.
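The denominator problem can be made concrete with a purely illustrative calculation. The figures in the following sketch are hypothetical and are not drawn from any survey described in this report; the point is only that a fixed number of favorable anecdotes is compatible with very different response rates depending on how many people tried marijuana and were never heard from.

```python
# Illustrative sketch only: hypothetical figures, not data from this report.
# A fixed numerator (positive anecdotes) implies very different response
# rates once the unknown denominator (everyone who tried marijuana for a
# medical purpose) is allowed to vary.

positive_reports = 40  # hypothetical number of people reporting benefit

for assumed_total_users in (50, 500, 5000):  # hypothetical denominators
    implied_rate = positive_reports / assumed_total_users
    print(f"denominator {assumed_total_users:>5}: implied response rate {implied_rate:.1%}")

# The implied rate ranges from 80% down to 0.8%, so anecdotes alone cannot
# distinguish a highly effective treatment from a placebo-level response.
```

This is the sense in which data with a known denominator, collected under controlled conditions, are needed before any clinical value can be estimated.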
CANNABIS AND THE CANNABINOIDS
Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of
marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate Δ9-THC. In the few cases where variants of THC are discussed, the full names are used. All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy."
Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.
Cannabinoids are produced in epidermal glands on the leaves (especially the upper
ones), stems, and the bracts that support the flowers of the marijuana plant. Although the
flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on
the plant, probably because of the accumulation of resin secreted by the supporting
bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and
their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 1994).14 The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition.
ORGANIZATION OF THE REPORT
Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology.
Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use.
Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana.
Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.
Three focal concerns in evaluating the medical use of marijuana are:
1. Evaluation of the effects of isolated cannabinoids;
2. Evaluation of the risks associated with the medical use of marijuana; and
3. Evaluation of the use of smoked marijuana.
EFFECTS OF ISOLATED CANNABINOIDS
Cannabinoid Biology
Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids.
Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions:
o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory.
o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear.
o The brain develops tolerance to cannabinoids.
o Animal research demonstrates the potential for dependence, but this
potential is observed under a narrower range of conditions than with
benzodiazepines, opiates, cocaine, or nicotine.
o Withdrawal symptoms can be observed in animals but appear to be mild
compared to opiates or benzodiazepines, such as diazepam (Valium).
Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems.
Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. Because different cannabinoids appear to
have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone.
Efficacy of Cannabinoid Drugs
The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.)
The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting.
Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified.
Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs.
Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances.
Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems.
Influence of Psychological Effects on Therapeutic Effects
The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those
patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite.
Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect.
Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials.
RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA
Physiological Risks
Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants.
For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use.
The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung
damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies.
Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease.
Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent.
Marijuana Dependence and Withdrawal
A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse.
Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping.
Marijuana as a "Gateway" Drug
Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age.
In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use.
Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would
not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential.
Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids.
USE OF SMOKED MARIJUANA
Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups.
Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy.
The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use.
Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions:
o failure of all approved medications to provide relief has been documented,
o the symptoms can reasonably be expected to be relieved by rapid- onset cannabinoid drugs,
o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and
o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a
submission by a physician to provide marijuana to a patient for a specified use.
Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.data, it is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones.
Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use.
It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.
HOW THIS STUDY WAS CONDUCTED
Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions.
Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluating the methods used in various studies and the validity of the authors' conclusions. Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results
of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves.
The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers).
Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from.
The study team visited four cannabis buyers' clubs in California (the Oakland
Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los
Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical
Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in
Los Angeles and Louisiana State University Medical Center in New Orleans). We
listened to many individual stories from the buyers' clubs about using marijuana to treat a
variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS
9
patients. Marinol is the brand name for dronabinol, which is
(THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.
MARIJUANA TODAY
The Changing Legal Landscape
In the 20th century, marijuana has been used more for its euphoric effects than as a
medicine. Its psychological and behavioral effects have concerned public officials since
the drug first appeared in the southwestern and southern states during the first two
decades of the century. By 1931, at least 29 states had prohibited use of the drug for
3
nonmedicalpurposes. MarijuanawasfirstregulatedatthefederallevelbytheMarijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug. Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S.
-tetrahydrocannabinol
Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior.
In the late 1960s and early 1970s, there was a sharp increase in marijuana use among
adolescents and young adults. The current legal status of marijuana was established in
1970 with the passage of the Controlled Substances Act, which divided drugs into five
schedules and placed marijuana in Schedule I, the category for drugs with high potential
for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In
1972, the National Organization for the Reform of Marijuana Legislation (NORML), an
organization that supports decriminalization of marijuana, unsuccessfully petitioned the
Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to
Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments,
13
less toxic, and in many cases more effective than conventional medicines.
years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients.
Since NORML's petition in 1972, there have been a variety of legal decisions
concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized
use of marijuana, although some of them recriminalized marijuana use in the 1980s and
1990s. During the 1970s, reports of the medical value of marijuana began to appear,
particularly claims that marijuana relieved the nausea associated with chemotherapy.
Health departments in six states conducted small studies to investigate the reports. When
the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved
their symptoms, most dramatically those associated with AIDS wasting. Over this period
a number of defendants charged with unlawful possession of marijuana claimed that they
were using the drug to treat medical conditions and that violation of the law was therefore
justified (the so-called medical necessity defense). Although most courts rejected these
8
Against that backdrop, voters in California and Arizona in 1996 passed two referenda
that attempted to legalize the medical use of marijuana under particular conditions. Public
support for patient access to marijuana for medical use appears substantial; public
opinion polls taken during 1997 and 1998 generally reported 60—70 percent of
15
However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises
complex legal questions.
Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been
claims, some accepted them.
respondents in favor of allowing medical uses of marijuana.
Thus, for 25
important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate.
Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D).
Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use.
1,10,11
Marijuana's use as an herbal remedy before the 20th century is well documented.
However, modern medicine adheres to different standards from those used in the past.
The question is not whether marijuana can be used as an herbal remedy but rather how
well this remedy meets today's standards of efficacy and safety. We understand much
more than previous generations about medical risks. Our society generally expects its
licensed medications to be safe, reliable, and of proven efficacy; contaminants and
inconsistent ingredients in our health treatments are not tolerated. That refers not only to
prescription and over-the-counter drugs but also to vitamin supplements and herbal
remedies purchased at the grocery store. For example, the essential amino acid l-
tryptophan was widely sold in health food stores as a natural remedy for insomnia until
early 1990 when it became linked to an epidemic of a new and potentially fatal illness 9,12
When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of
the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer.
Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their
7
rootseitherdirectlyorindirectlyinplantremedies. Atthesametime,mostcurrent prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid.
Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of
(eosinophilia-myalgia syndrome).
modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development. Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds.
Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly
4,5
In 1997, 46 percent of Americans sought
nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number
of visits to alternative medicine practitioners appears to have exceeded the number of
5,6
Recent interest in the medical use of marijuana
coincides with this trend toward self-help and a search for "natural" therapies. Indeed,
several people who spoke at the IOM public hearings in support of the medical use of
marijuana said that they generally preferred herbal medicines to standard
pharmaceuticals. However, few alternative therapies have been carefully and
systematically tested for safety and efficacy, as is required for medications approved by
2
WHO USES MEDICAL MARIJUANA?
There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed.
John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1).
seek alternative, low-technology therapies.
visits to primary care physicians.
the FDA (Food and Drug Administration).
The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile. For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old.
Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second- largest group is patients with chronic pain.
Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting.
Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it.
Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients).
Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission.
The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.
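The force of this limitation can be illustrated with purely hypothetical numbers (none of them from the report): the same count of favorable anecdotes is compatible with a high or a negligible response rate, depending entirely on the unknown denominator.

    # Illustrative only: hypothetical counts, not survey data.
    positive_reports = 40                   # favorable anecdotes heard by a study team
    for total_users in (50, 500, 5000):     # unknown denominator, assumed values
        rate = positive_reports / total_users
        print(f"{total_users} users -> apparent response rate {rate:.1%}")
    # Prints 80.0%, 8.0%, and 0.8% -- same numerator, very different conclusions.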
CANNABIS AND THE CANNABINOIDS
Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of marijuana lists 66 cannabinoids (Table 1.5).[16] But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.[16,18] Delta-9-tetrahydrocannabinol (delta-9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1).[9] Throughout this report, THC is used to indicate delta-9-THC. In the few cases where variants of THC are discussed, the full names are used. All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy."

Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.

Cannabinoids are produced in epidermal glands on the leaves (especially the upper ones), stems, and the bracts that support the flowers of the marijuana plant. Although the flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on the plant, probably because of the accumulation of resin secreted by the supporting bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 1994[14]). The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition.
ORGANIZATION OF THE REPORT
Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology.
Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use.
Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana.
Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.
Primum non nocere. This is the physician's first rule: whatever treatment a physician prescribes to a patient--first, that treatment must not harm the patient.
The most contentious aspect of the medical marijuana debate is not whether marijuana can alleviate particular symptoms but rather the degree of harm associated with its use. This chapter explores the negative health consequences of marijuana use, first with respect to drug abuse, then from a psychological perspective, and finally from a physiological perspective.
THE MARIJUANA "HIGH"
The most commonly reported effects of smoked marijuana are a sense of well-being or euphoria and increased talkativeness and laughter alternating with periods of introspective dreaminess followed by lethargy and sleepiness (see reviews by Adams and Martin, 1996,[1] Hall and Solowij,[59] and Hall et al.[60]). A characteristic feature of a marijuana "high" is a distortion in the sense of time associated with deficits in short-term memory and learning. A marijuana smoker typically has a sense of enhanced physical and emotional sensitivity, including a feeling of greater interpersonal closeness. The most obvious behavioral abnormality displayed by someone under the influence of marijuana is difficulty in carrying on an intelligible conversation, perhaps because of an inability to remember what was just said even a few words earlier.
The high associated with marijuana is not generally claimed to be integral to its therapeutic value. But mood enhancement, anxiety reduction, and mild sedation can be desirable qualities in medications--particularly for patients suffering pain and anxiety. Thus, although the psychological effects of marijuana are merely side effects in the treatment of some symptoms, they might contribute directly to relief of other symptoms. They also must be monitored in controlled clinical trials to discern which effect of cannabinoids is beneficial. These possibilities are discussed later under the discussions of specific symptoms in chapter 4.
The effects of various doses and routes of delivery of THC are shown in Table 3.1.
Adverse Mood Reactions
Although euphoria is the more common reaction to smoking marijuana, adverse mood reactions can occur. Such reactions occur most frequently in inexperienced users after large doses of smoked or oral marijuana. They usually disappear within hours and respond well to reassurance and a supportive environment. Anxiety and paranoia are the most common acute adverse reactions;[59] others include panic, depression, dysphoria, depersonalization, delusions, illusions, and hallucinations.[1,40,66,69] Of regular marijuana smokers, 17% report that they have experienced at least one of the symptoms, usually early in their use of marijuana.[145] Those observations are particularly relevant for the use of medical marijuana in people who have not previously used marijuana.
DRUG DYNAMICS
There are many misunderstandings about drug abuse and dependence (see reviews by O'Brien[114] and Goldstein[54]). The terms and concepts used in this report are as defined in the most recent Diagnostic and Statistical Manual of Mental Disorders (DSM-IV),[3] the most influential system in the United States for diagnoses of mental disorders, including substance abuse (see Box 3.1). Tolerance, dependence, and withdrawal are often presumed to imply abuse or addiction, but this is not the case. Tolerance and dependence are normal physiological adaptations to repeated use of any drug. The correct use of prescribed medications for pain, anxiety, and even hypertension commonly produces tolerance and some measure of physiological dependence.

Even a patient who takes a medicine for appropriate medical indications and at the correct dosage can develop tolerance, physical dependence, and withdrawal symptoms if the drug is stopped abruptly rather than gradually. For example, a hypertensive patient receiving a beta-adrenergic receptor blocker, such as propranolol, might have a good therapeutic response; but if the drug is stopped abruptly, there can be a withdrawal syndrome that consists of tachycardia and a rebound increase in blood pressure to a point that is temporarily higher than before administration of the medication began.

Because it is an illegal substance, some people consider any use of marijuana as substance abuse. However, this report uses the medical definition; that is, substance abuse is a maladaptive pattern of repeated substance use manifested by recurrent and significant adverse consequences.[3] Substance abuse and dependence are both diagnoses of pathological substance use. Dependence is the more serious diagnosis and implies compulsive drug use that is difficult to stop despite significant substance-related problems (see Box 3.2).

Reinforcement

Drugs vary in their ability to produce good feelings in users, and the more strongly reinforcing a drug is, the more likely it will be abused (G. Koob, Institute of Medicine (IOM) workshop). Marijuana is indisputably reinforcing for many people. The reinforcing properties of even so mild a stimulant as caffeine are typical of reinforcement by addicting drugs (reviewed by Goldstein[54] in 1994). Caffeine is reinforcing for many people at low doses (100-200 mg, the average amount of caffeine in one to two cups of coffee) and is aversive at high doses (600 mg, the average amount of caffeine in six cups of coffee). The reinforcing effects of many drugs are different for different people. For example, caffeine was most reinforcing for test subjects who scored lowest on tests of anxiety but tended not to be reinforcing for the most anxious subjects.
As an argument to dispute the abuse potential of marijuana, some have cited the observation that animals do not willingly self-administer THC, as they will cocaine. Even if that were true, it would not be relevant to human use of marijuana. The value in animal models of drug self-administration is not that they are necessary to show that a drug is reinforcing but rather that they provide a model in which the effects of a drug can be studied. Furthermore, THC is indeed rewarding to animals at some doses but, like many reinforcing drugs, is aversive at high doses (4.0 mg/kg).[93] Similar effects have been found in experiments conducted in animals outfitted with intravenous catheters that allow them to self-administer WIN 55,212, a drug that mimics the effects of THC.[100]

A specific set of neural pathways has been proposed to be a "reward system" that underlies the reinforcement of drugs of abuse and other pleasurable stimuli.[51] Reinforcing properties of drugs are associated with their ability to increase concentrations of particular neurotransmitters in areas that are part of the proposed brain reward system. The median forebrain bundle and the nucleus accumbens are associated with brain reward pathways.[88,144] Cocaine, amphetamine, alcohol, opioids, nicotine, and THC all increase extracellular fluid dopamine in the nucleus accumbens region (reviewed by Koob and Le Moal[88] and Nestler and Aghajanian[110] in 1997). However, it is important to note that brain reward systems are not strictly "drug reinforcement centers." Rather, their biological role is to respond to a range of positive stimuli, including sweet foods and sexual attraction.

Tolerance

The rate at which tolerance to the various effects of any drug develops is an important consideration for its safety and efficacy. For medical use, tolerance to some effects of cannabinoids might be desirable. Differences in the rates at which tolerance to the multiple effects of a drug develops can be dangerous. For example, tolerance to the euphoric effects of heroin develops faster than tolerance to its respiratory depressant effects, so heroin users tend to increase their daily doses to reach their desired level of euphoria, thereby putting themselves at risk for respiratory arrest. Because tolerance to the various effects of cannabinoids might develop at different rates, it is important to evaluate independently their effects on mood, motor performance, memory, and attention, as well as any therapeutic use under investigation.

Tolerance to most of the effects of marijuana can develop rapidly after only a few doses, and it also disappears rapidly. Tolerance to large doses has been found to persist in experimental animals for long periods after cessation of drug use. Performance impairment is less among people who use marijuana heavily than it is among those who use marijuana only occasionally,[29,104,124] possibly because of tolerance. Heavy users tend to reach higher plasma concentrations of THC than light users after similar doses of THC, arguing against the possibility that heavy users show less performance impairment because they somehow absorb less THC (perhaps due to differences in smoking behavior).[95]

There appear to be variations in the development of tolerance to the different effects of marijuana and oral THC. For example, daily marijuana smokers participated in a residential laboratory study to compare the development of tolerance to THC pills and to smoked marijuana.[61,62] One group was given marijuana cigarettes to smoke four times per day for four consecutive days; another group was given THC pills on the same schedule. During the four-day period, both groups became tolerant to feeling "high" and what they reported as a "good drug effect." In contrast, neither group became tolerant to the stimulatory effects of marijuana or THC on appetite. "Tolerance" does not mean that the drug no longer produced the effects but simply that the effects were less at the end than at the beginning of the four-day period. The marijuana smoking group reported feeling "mellow" after smoking and did not show tolerance to this effect; the group that took THC pills did not report feeling "mellow." The difference was also reported by many people who described their experiences to the IOM study team.

The oral and smoked doses were designed to deliver roughly equivalent amounts of THC to a subject. Each smoked marijuana dose consisted of five 10-second puffs of a marijuana cigarette containing 3.1% THC; the pills contained 30 mg of THC. Both groups also received placebo drugs during other four-day periods. Although the dosing of the two groups was comparable, different routes of administration resulted in different patterns of drug effect. The peak effect of smoked marijuana is usually felt within minutes and declines sharply after 30 minutes;[68,95] the peak effect of oral THC is usually not felt until about an hour and lasts for several hours.

Withdrawal

A distinctive marijuana and THC withdrawal syndrome has been identified, but it is mild and subtle compared with the profound physical syndrome of alcohol or heroin withdrawal.[31,74] The symptoms of marijuana withdrawal include restlessness, irritability, mild agitation, insomnia, sleep EEG disturbance, nausea, and cramping (Table 3.2). In addition to those symptoms, two recent studies noted several more. A group of adolescents under treatment for conduct disorders also reported fatigue and illusions or hallucinations after marijuana abstinence (this study is discussed further in the section on "Prevalence and Predictors of Dependence on Marijuana and Other Drugs").[31] In a residential study of daily marijuana users, withdrawal symptoms included sweating and runny nose, in addition to those listed above.[62] A marijuana withdrawal syndrome, however, has been reported only in a group of adolescents in treatment for substance abuse problems[31] and in a research setting where subjects were given marijuana or THC daily.[62,74]

Withdrawal symptoms have been observed in carefully controlled laboratory studies of people after use of both oral THC and smoked marijuana.[61,62] In one study, subjects were given very high doses of oral THC: 180-210 mg per day for 10-20 days, roughly equivalent to smoking 9-10 2% THC cigarettes per day.[118] During the abstinence period at the end of the study, the study subjects were irritable and showed insomnia, runny nose, sweating, and decreased appetite. The withdrawal symptoms, however, were short lived. In four days they had abated. The time course contrasts with that in another study in which lower doses of oral THC were used (80-120 mg/day for four days) and withdrawal symptoms were still near maximal after four days.[61,62]
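The stated equivalence between the high oral doses and daily smoking can be checked with rough arithmetic. The sketch below is illustrative only; it assumes about 1 g of plant material per cigarette and complete delivery of the THC content, neither of which is specified in the report.

    # Back-of-the-envelope check of the oral-dose equivalence (assumptions noted above).
    cigarette_mass_mg = 1000                # assumed plant material per cigarette
    thc_fraction = 0.02                     # 2% THC cigarettes, as stated in the study
    thc_per_cigarette_mg = cigarette_mass_mg * thc_fraction   # = 20 mg
    for oral_dose_mg in (180, 210):
        print(oral_dose_mg / thc_per_cigarette_mg)            # 9.0 and 10.5 cigarettes/day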
In animals, simply discontinuing chronic heavy dosing of THC does not reveal withdrawal symptoms, but the "removal" of THC from the brain can be made abrupt by another drug that blocks THC at its receptor if administered when the chronic THC is withdrawn. The withdrawal syndrome is pronounced, and the behavior of the animals becomes hyperactive and disorganized.[153] The half-life of THC in brain is about an hour.[16,24] Although traces of THC can remain in the brain for much longer periods, the amounts are not physiologically significant. Thus, the lack of a withdrawal syndrome when THC is abruptly withdrawn without administration of a receptor-blocking drug is probably not due to a prolonged decline in brain concentrations.
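For intuition about how quickly brain concentrations fall once dosing stops, a generic first-order decay calculation using the roughly one-hour half-life cited above gives the following. This is a standard pharmacokinetic sketch, not a model taken from the report.

    # Fraction of THC remaining in brain t hours after dosing stops, assuming first-order decay.
    half_life_h = 1.0                       # approximate brain half-life cited in the text
    def fraction_remaining(t_hours):
        return 0.5 ** (t_hours / half_life_h)
    for t in (1, 3, 6):
        print(t, round(fraction_remaining(t), 3))   # 0.5, 0.125, 0.016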
Craving
Craving, the intense desire for a drug, is the most difficult aspect of addiction to overcome. Research on craving has focused on nicotine, alcohol, cocaine, and opiates but has not specifically addressed marijuana.[115] Thus, while this section briefly reviews what is known about drug craving, its relevance to marijuana use has not been established.

Most people who suffer from addiction relapse within a year of abstinence, and they often attribute their relapse to craving.[58] As addiction develops, craving increases even as maladaptive consequences accumulate. Animal studies indicate that the tendency to relapse is based on changes in brain function that continue for months or years after the last use of the drug.[115] Whether neurobiological conditions change during the manifestation of an abstinence syndrome remains an unanswered question in drug abuse research.[74] The "liking" of sweet foods, for example, is mediated by opioid forebrain systems and by brain stem systems,[88] whereas "wanting" seems to be mediated by ascending dopamine neurons that project to the nucleus accumbens.[109]

Anticraving medications have been developed for nicotine and alcohol. The antidepressant, bupropion, blocks nicotine craving, while naltrexone blocks alcohol craving.[115]

Another category of addiction medication includes drugs that block other drugs' effects. Some of those drugs also block craving. For example, methadone blocks the euphoric effects of heroin and also reduces craving.

MARIJUANA USE AND DEPENDENCE

Prevalence of Use

Millions of Americans have tried marijuana, but most are not regular users. In 1996, 68.6 million people--32% of the U.S. population over 12 years old--had tried marijuana or hashish at least once in their lifetime, but only 5% were current users.[132] Marijuana use is most prevalent among 18- to 25-year-olds and declines sharply after the age of 34 (Figure 3.1).[77,132] Whites are more likely than blacks to use marijuana in adolescence, although the difference decreases by adulthood.[132]
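The survey percentages imply the underlying population sizes. The sketch below simply inverts the figures quoted above, reading the 5% as a share of the same over-12 population; it is illustrative arithmetic, not additional survey data.

    # Derived from the 1996 figures quoted above: 68.6 million lifetime users = 32% of those over 12.
    lifetime_users_millions = 68.6
    lifetime_fraction = 0.32
    population_over_12 = lifetime_users_millions / lifetime_fraction    # about 214 million
    current_users = population_over_12 * 0.05                           # 5% current users
    print(round(population_over_12, 1), round(current_users, 1))        # ~214.4 and ~10.7 million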
Most people who have used marijuana did so first during adolescence. Social influences, such as peer pressure and prevalence of use by peers, are highly predictive of initiation into marijuana use.[9] Initiation is not, of course, synonymous with continued or regular use. A cohort of 456 students who experimented with marijuana during their high school years were surveyed about their reasons for initiating, continuing, and stopping their marijuana use.[9] Students who began as heavy users were excluded from the analysis. Those who did not become regular marijuana users cited two types of reasons for discontinuing. The first was related to health and well-being; that is, they felt that marijuana was bad for their health or for their family and work relationships. The second type was based on age-related changes in circumstances, including increased responsibility and decreased regular contact with other marijuana users. Among high school students who quit, parental disapproval was a stronger influence than peer disapproval in discontinuing marijuana use. In the initiation of marijuana use, the reverse was true. The reasons cited by those who continued to use marijuana were to "get in a better mood or feel better." Social factors were not a significant predictor of continued use. Data on young adults show similar trends. Those who use drugs in response to social influences are more likely to stop using them than those who also use them for psychological reasons.[80]

The age distribution of marijuana users among the general population contrasts with that of medical marijuana users. Marijuana use generally declines sharply after the age of 34 years, whereas medical marijuana users tend to be over 35. That raises the question of what, if any, relationship exists between abuse and medical use of marijuana; however, no studies reported in the scientific literature have addressed this question.

Prevalence and Predictors of Dependence on Marijuana and Other Drugs

Many factors influence the likelihood that a particular person will become a drug abuser or an addict; the user, the environment, and the drug are all important factors (Table 3.3).[114] The first two categories apply to potential abuse of any substance; that is, people who are vulnerable to drug abuse for individual reasons and who find themselves in an environment that encourages drug abuse are initially likely to abuse the most readily available drug--regardless of its unique set of effects on the brain.

The third category includes drug-specific effects that influence the abuse liability of a particular drug. As discussed earlier in this chapter, the more strongly reinforcing a drug is, the more likely that it will be abused. The abuse liability of a drug is enhanced by how quickly its effects are felt, and this is determined by how the drug is delivered. In general, the effects of drugs that are inhaled or injected are felt within minutes, and the effects of drugs that are ingested take a half hour or more.
The proportion of people who become addicted varies among drugs. Table 3.4 shows estimates for the proportion of people among the general population who used or became dependent on different types of drugs. The proportion of users that ever became dependent includes anyone who was ever dependent--whether it was for a period of weeks or years--and thus includes more than those who are currently dependent. Compared to most other drugs listed in this table, dependence among marijuana users is relatively rare. This might be due to differences in specific drug effects, the availability of or penalties associated with the use of the different drugs, or some combination.
Daily use of most illicit drugs is extremely rare in the general population. In 1989, daily use of marijuana among high school seniors was less than that of alcohol (2.9% and 4.2%, respectively).[76]

Drug dependence is more prevalent in some sectors of the population than in others. Age, gender, and race or ethnic group are all important.[8] Excluding tobacco and alcohol, the following trends of drug dependence are statistically significant:[8] Men are 1.6 times as likely as women to become drug dependent, non-Hispanic whites are about twice as likely as blacks to become drug dependent (the difference between non-Hispanic and Hispanic whites was not significant), and people 25-44 years old are more than three times as likely as those over 45 years old to become drug dependent.

More often than not, drug dependence co-occurs with other psychiatric disorders. Most people with a diagnosis of drug dependence disorder also have a diagnosis of another psychiatric disorder (76% of men and 65% of women).[76] The most frequent co-occurring disorder is alcohol abuse; 60% of men and 30% of women with a diagnosis of drug dependence also abuse alcohol. In women who are drug dependent, phobic disorders and major depression are almost equally common (29% and 28%, respectively). Note that this study distinguished only between alcohol, nicotine and "other drugs"; marijuana was grouped among "other drugs." The frequency with which drug dependence and other psychiatric disorders co-occur might not be the same for marijuana and other drugs that were included in that category.

A strong association between drug dependence and antisocial personality or its precursor, conduct disorder, is also widely reported in children and adults (reviewed in 1998 by Robins[126]). Although the causes of the association are uncertain, Robins recently concluded that it is more likely that conduct disorders generally lead to substance abuse than the reverse.[126] Such a trend might, however, depend on the age at which the conduct disorder is manifested.

A longitudinal study by Brooks and co-workers noted a significant relationship between adolescent drug use and disruptive disorders in young adulthood; except for earlier psychopathology, such as childhood conduct disorder, the drug use preceded the psychiatric disorders.[18] In contrast with use of other illicit drugs and tobacco, moderate (less than once a week and more than once a month) to heavy marijuana use did not predict anxiety or depressive disorders; but it was similar to those other drugs in predicting antisocial personality disorder. The rates of disruptive disorders increased with increased drug use. Thus, heavy drug use among adolescents can be a warning sign for later psychiatric disorders; whether it is an early manifestation of or a cause of those disorders remains to be determined.
Psychiatric disorders are more prevalent among adolescents who use drugs--including alcohol and nicotine--than among those who do not.[79] Table 3.5 indicates that adolescent boys who smoke cigarettes daily are about 10 times as likely to have a psychiatric disorder diagnosis as those who do not smoke. However, the table does not compare intensity of use among the different drug classes. Thus, although daily cigarette smoking among adolescent boys is more strongly associated with psychiatric disorders than is any use of illicit substances, it does not follow that this comparison is true for every amount of cigarette smoking.[79]

Few marijuana users become dependent on it (Table 3.4), but those who do encounter problems similar to those associated with dependence on other drugs.[19,143] Dependence appears to be less severe among people who use only marijuana than among those who abuse cocaine or those who abuse marijuana with other drugs (including alcohol).[19,143]

Data gathered in 1990-1992 from the National Comorbidity Study of over 8,000 persons 15-54 years old indicate that 4.2% of the general population were dependent on marijuana at some time.[8] Similar results for the frequency of substance abuse among the general population were obtained from the Epidemiological Catchment Area Program, a survey of over 19,000 people. According to data collected in the early 1980s for that study, 4.4% of adults have, at one time, met the criteria for marijuana dependence. In comparison, 13.8% of adults met the criteria for alcohol dependence and 36.0% for tobacco dependence. After alcohol and nicotine, marijuana was the substance most frequently associated with a diagnosis of substance dependence.

In a 15-year study begun in 1979, 7.3% of 1,201 adolescents and young adults in suburban New Jersey at some time met the criteria for marijuana dependence; this indicates that the rate of marijuana dependence might be even higher in some groups of adolescents and young adults than in the general population.[71] Adolescents meet the criteria for drug dependence at lower rates of marijuana use than do adults, and this suggests that they are more vulnerable to dependence than adults[25] (see Box 3.2).

Youths who are already dependent on other substances are particularly vulnerable to marijuana dependence. For example, Crowley and co-workers[31] interviewed a group of 229 adolescent patients in a residential treatment program for delinquent, substance-involved youth and found that those patients were dependent on an average of 3.2 substances. The adolescents had previously been diagnosed as dependent on at least one substance (including nicotine and alcohol) and had three or more conduct disorder symptoms during their life. About 83% of those who had used marijuana at least six times went on to develop marijuana dependence. About equal numbers of youths in the study had a diagnosis of marijuana dependence and a diagnosis of alcohol dependence; fewer were nicotine dependent. Comparisons of dependence potential between different drugs should be made cautiously. The probability that a particular drug will be abused is influenced by many factors, including the specific drug effects and availability of the drug.
Although parents often state that marijuana caused their children to be rebellious, the troubled adolescents in the study by Crowley and co-workers developed conduct disorders before marijuana abuse. That is consistent with reports that the more symptoms of conduct disorders children have, the younger they begin drug abuse,[127] and that the earlier they begin drug use, the more likely it is to be followed by abuse or dependence.[125]

Genetic factors are known to play a role in the likelihood of abuse for drugs other than marijuana,[7,129] and it is not unexpected that genetic factors play a role in the marijuana experience, including the likelihood of abuse. A study of over 8,000 male twins listed in the Vietnam Era Twin Registry indicated that genes have a statistically significant influence on whether a person finds the effects of marijuana pleasant.[97] Not surprisingly, people who found marijuana to be pleasurable used it more often than those who found it unpleasant. The study suggested that, although social influences play an important role in the initiation of use, individual differences--perhaps associated with the brain's reward system--influence whether a person will continue using marijuana. Similar results were found in a study of female twins.[86] Family and social environment strongly influenced the likelihood of ever using marijuana but had little effect on the likelihood of heavy use or abuse. The latter were more influenced by genetic factors. Those results are consistent with the finding that the degree to which rats find THC rewarding is genetically based.[92]

In summary, although few marijuana users develop dependence, some do. But they appear to be less likely to do so than users of other drugs (including alcohol and nicotine), and marijuana dependence appears to be less severe than dependence on other drugs. Drug dependence is more prevalent in some sectors of the population than others, but no group has been identified as particularly vulnerable to the drug-specific effects of marijuana. Adolescents, especially troubled ones, and people with psychiatric disorders (including substance abuse) appear to be more likely than the general population to become dependent on marijuana.

If marijuana or cannabinoid drugs were approved for therapeutic uses, it would be important to consider the possibility of dependence, particularly for patients at high risk for substance dependence. Some controlled substances that are approved medications produce dependence after long-term use; this, however, is a normal part of patient management and does not generally present undue risk to the patient.

Progression from Marijuana to Other Drugs

The fear that marijuana use might cause, as opposed to merely precede, the use of drugs that are more harmful is of great concern. To judge from comments submitted to the IOM study team, it appears to be of greater concern than the harms directly related to marijuana itself. The discussion that marijuana is a "gateway" drug implicitly recognizes that other illicit drugs might inflict greater damage to health or social relations than marijuana. Although the scientific literature generally discusses drug use progression between a variety of drug classes, including alcohol and tobacco, the public discussion has focused on marijuana as a "gateway" drug that leads to abuse of more harmful illicit drugs, such as cocaine and heroin.
There are strikingly regular patterns in the progression of drug use from adolescence to adulthood. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug that most people encounter. Not surprisingly, most users of other illicit drugs used marijuana first.[81,82] In fact, most drug users do not begin their drug use with marijuana--they begin with alcohol and nicotine, usually when they are too young to do so legally.[82,90]

The gateway analogy evokes two ideas that are often confused. The first, more often referred to as the "stepping stone" hypothesis, is the idea that progression from marijuana to other drugs arises from pharmacological properties of marijuana itself.[82] The second is that marijuana serves as a gateway to the world of illegal drugs in which youths have greater opportunity and are under greater social pressure to try other illegal drugs. The latter interpretation is most often used in the scientific literature, and it is supported, although not proven, by the available data.

The stepping stone hypothesis applies to marijuana only in the broadest sense. People who enjoy the effects of marijuana are, logically, more likely to be willing to try other mood-altering drugs than are people who are not willing to try marijuana or who dislike its effects. In other words, many of the factors associated with a willingness to use marijuana are, presumably, the same as those associated with a willingness to use other illicit drugs. Those factors include physiological reactions to the drug effect, which are consistent with the stepping stone hypothesis, but also psychosocial factors, which are independent of drug-specific effects. There is no evidence that marijuana serves as a stepping stone on the basis of its particular physiological effect. One might argue that marijuana is generally used before other illicit mood-altering drugs, in part, because its effects are milder; in that case, marijuana is a stepping stone only in the same sense as taking a small dose of a particular drug and then increasing that dose over time is a stepping stone to increased drug use.

Whereas the stepping stone hypothesis presumes a predominantly physiological component of drug progression, the gateway theory is a social theory. The latter does not suggest that the pharmacological qualities of marijuana make it a risk factor for progression to other drug use. Instead, the legal status of marijuana makes it a gateway drug.[82]

Psychiatric disorders are associated with substance dependence and are probably risk factors for progression in drug use. For example, the troubled adolescents studied by Crowley and co-workers[31] were dependent on an average of 3.2 substances, and this suggests that their conduct disorders were associated with increased risk of progressing from one drug to another. Abuse of a single substance is probably also a risk factor for later multiple drug use. For example, in a longitudinal study that examined drug use and dependence, about 26% of problem drinkers reported that they first used marijuana after the onset of alcohol-related problems (R. Pandina, IOM workshop). The study also found that 11% of marijuana users developed chronic marijuana problems; most also had alcohol problems.
Intensity of drug use is an important risk factor in progression. Daily marijuana users are more likely than their peers to be extensive users of other substances (for review, see Kandel and Davies[78]). Of 34- to 35-year-old men who had used marijuana 10-99 times by the age 24-25, 75% never used any other illicit drug; 53% of those who had used it more than 100 times did progress to using other illicit drugs 10 or more times.[78] Comparable proportions for women are 64% and 50%.

The factors that best predict use of illicit drugs other than marijuana are probably the following: age of first alcohol or nicotine use, heavy marijuana use, and psychiatric disorders. However, progression to illicit drug use is not synonymous with heavy or persistent drug use. Indeed, although the age of onset of use of licit drugs (alcohol and nicotine) predicts later illicit drug use, it does not appear to predict persistent or heavy use of illicit drugs.[90]

Data on the gateway phenomenon are often overinterpreted. For example, one study reports that "marijuana's role as a gateway drug appears to have increased."[55] It was a retrospective study based on interviews of drug abusers who reported smoking crack or injecting heroin daily. The data from the study provide no indication of what proportion of marijuana users become serious drug abusers; rather, they indicate that serious drug abusers usually use marijuana before they smoke crack or inject heroin. Only a small percentage of the adult population uses crack or heroin daily; during the five-year period from 1993 to 1997, an average of three people per 1,000 used crack and about two per 1,000 used heroin in the preceding month.[132]

Many of the data on which the gateway theory is based do not measure dependence; instead, they measure use--even once-only use. Thus, they show only that marijuana users are more likely to use other illicit drugs (even if only once) than are people who never use marijuana, not that they become dependent or even frequent users. The authors of these studies are careful to point out that their data should not be used as evidence of an inexorable causal progression; rather they note that identifying stage-based user groups makes it possible to identify the specific risk factors that predict movement from one stage of drug use to the next--the real issue in the gateway discussion.[25]

In the sense that marijuana use typically precedes rather than follows initiation into the use of other illicit drugs, it is indeed a gateway drug. However, it does not appear to be a gateway drug to the extent that it is the cause or even that it is the most significant predictor of serious drug abuse; that is, care must be taken not to attribute cause to association. The most consistent predictors of serious drug use appear to be the intensity of marijuana use and co-occurring psychiatric disorders or a family history of psychopathology (including alcoholism).[78,83]
An important caution is that data on drug use progression pertain to nonmedical drug use. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would be the same. Kandel and co-workers also included nonmedical use of prescription psychoactive drugs in their study of drug use progression.[82] In contrast with the use of alcohol, nicotine, and illicit drugs, there was not a clear and consistent sequence of drug use involving the abuse of prescription psychoactive drugs. The current data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse among medical marijuana users. Whether the medical use of marijuana might encourage drug abuse among the general community--not among medical marijuana users themselves but among others simply because of the fact that marijuana would be used for medical purposes--is another question.
LINK BETWEEN MEDICAL USE AND DRUG ABUSE
Almost everyone who spoke or wrote to the IOM study team about the potential harms posed by the medical use of marijuana felt that it would send the wrong message to children and teenagers. They stated that information about the harms caused by marijuana is undermined by claims that marijuana might have medical value. Yet many of our powerful medicines are also dangerous medicines. These two facets of medicine-- effectiveness and risk--are inextricably linked.
The question here is not whether marijuana can be both harmful and helpful but whether the perception of its benefits will increase its abuse. For now, any answer to the question remains conjecture. Because marijuana is not an approved medicine, there is little information about the consequences of its medical use in modern society. Reasonable inferences might be drawn from some examples. Opiates, such as morphine and codeine, are an example of a class of drugs that is both abused to great harm and used to great medical benefit, and it would be useful to examine the relationship between their medical use and their abuse. In a "natural experiment" during 1973-1978 some states decriminalized marijuana, and others did not. Finally, one can examine the short-term consequences of the publicity surrounding the 1996 medical marijuana campaign in California and ask whether it had any measurable impact on marijuana consumption among youth in California; the consequences of the "message" that marijuana might have medical use are examined below.
Medical Use and Abuse of Opiates
Two highly influential papers published in the 1920s and 1950s led to widespread concern among physicians and medical licensing boards that liberal use of opiates would result in many addicts (reviewed by Moulin and co-workers[106] in 1996). Such fears have proven unfounded; it is now recognized that fear of producing addicts through medical treatment resulted in needless suffering among patients with pain as physicians needlessly limited appropriate doses of medications.[27,44] Few people begin their drug addiction problems with misuse of drugs that have been prescribed for medical use.[114]
Opiates are carefully regulated in the medical setting, and diversion of medically prescribed opiates to the black market is not generally considered to be a major problem.
No evidence suggests that the use of opiates or cocaine for medical purposes has increased the perception that their illicit use is safe or acceptable. Clearly, there are risks that patients will abuse marijuana for its psychoactive effects and some likelihood of diversion of marijuana from legitimate medical channels into the illicit market. But those risks do not differentiate marijuana from many accepted medications that are abused by some patients or diverted from medical channels for nonmedical use. Medications with abuse potential are placed in Schedule II of the Controlled Substances Act, which brings them under stricter control, including quotas on the amount that can be legally manufactured (see chapter 5 for discussion of the Controlled Substances Act). That scheduling also signals to physicians that a drug has abuse potential and that they should monitor its use by patients who could be at risk for drug abuse.
Marijuana Decriminalization
Monitoring the Future, the annual survey of values and lifestyles of high school seniors, revealed that high school seniors in decriminalized states reported using no more marijuana than did their counterparts in states where marijuana was not decriminalized.[72] Another study reported somewhat conflicting evidence indicating that decriminalization had increased marijuana use.[105] That study used data from the Drug Abuse Warning Network (DAWN), which has collected data on drug-related emergency room (ER) cases since 1975. There was a greater increase from 1975 to 1978 in the proportion of ER patients who had used marijuana in states that had decriminalized marijuana in 1975-1976 than in states that had not decriminalized it (Table 3.6). Despite the greater increase among decriminalized states, the proportion of marijuana users among ER patients by 1978 was about equal in states that had and states that had not decriminalized marijuana. That is because the non-decriminalized states had higher rates of marijuana use before decriminalization. In contrast with marijuana use, rates of other illicit drug use among ER patients were substantially higher in states that did not decriminalize marijuana use. Thus, there are different possible reasons for the greater increase in marijuana use in the decriminalized states. On the one hand, decriminalization might have led to an increased use of marijuana (at least among people who sought health care in hospital ERs). On the other hand, the lack of decriminalization might have encouraged greater use of drugs that are even more dangerous than marijuana.

The differences between the results for high school seniors from the Monitoring the Future study and the DAWN data are unclear, although the author of the latter study suggests that the reasons might lie in limitations inherent in how the DAWN data are collected.[105]

In 1976, the Netherlands adopted a policy of toleration for possession of up to 30 g of marijuana. There was little change in marijuana use during the seven years after the policy change, which suggests that the change itself had little effect; however, in 1984, when Dutch "coffee shops" that sold marijuana commercially spread throughout Amsterdam, marijuana use began to increase.[98] During the 1990s, marijuana use has continued to increase in the Netherlands at the same rate as in the United States and Norway--two countries that strictly forbid marijuana sale and possession. Furthermore, during this period, approximately equal percentages of American and Dutch 18 year olds used marijuana; Norwegian 18 year olds were about half as likely to have used marijuana. The authors of this study conclude that there is little evidence that the Dutch marijuana depenalization policy led to increased marijuana use, although they note that commercialization of marijuana might have contributed to its increased use. Thus, there is little evidence that decriminalization of marijuana use necessarily leads to a substantial increase in marijuana use.
The Medical Marijuana Debate
The most recent National Household Survey on Drug Abuse showed that among people 12-17 years old the perceived risk associated with smoking marijuana once or twice a week had decreased significantly between 1996 and 1997.[132] (Perceived risk is measured as the percentage of survey respondents who report that they "perceive great risk of harm" in using a drug at a specified frequency.) At first glance, that might seem to validate the fear that the medical marijuana debate of 1996--before passage of the California medical marijuana referendum in November 1996--had sent a message that marijuana use is safe. But a closer analysis of the data shows that Californian youth were an exception to the national trend. In contrast to the national trend, the perceived risk of marijuana use did not change among California youth between 1996 and 1997.[132] In summary, there is no evidence that the medical marijuana debate has altered adolescents' perceptions of the risks associated with marijuana use.[132]
PSYCHOLOGICAL HARMS
In assessing the relative risks and benefits related to the medical use of marijuana, the psychological effects of marijuana can be viewed both as unwanted side effects and as potentially desirable end points in medical treatment. However, the vast majority of research on the psychological effects of marijuana has been in the context of assessing the drug's intoxicating effects when it is used for nonmedical purposes. Thus, the literature does not directly address the effects of marijuana taken for medical purposes.
There are some important caveats to consider in attempting to extrapolate from the research mentioned above to the medical use of marijuana. The circumstances under which psychoactive drugs are taken are an important influence on their psychological effects. Furthermore, research protocols to study marijuana's psychological effects in most instances were required to use participants who already had experience with marijuana. People who might have had adverse reactions to marijuana either would choose not to participate in this type of study or would be screened out by the investigator. Therefore, the incidence of adverse reactions to marijuana that might occur in people with no marijuana experience cannot be estimated from such studies. A further complicating factor concerns the dose regimen used for laboratory studies. In most instances, laboratory research studies have looked at the effects of single doses of marijuana, which might be different from those observed when the drug is taken repeatedly for a chronic medical condition.
Nonetheless, laboratory studies are useful in suggesting what psychological functions might be studied when marijuana is evaluated for medical purposes. Results of laboratory studies indicate that acute and chronic marijuana use has pronounced effects on mood, psychomotor, and cognitive functions. These psychological domains should therefore be considered in assessing the relative risks and therapeutic benefits related to marijuana or cannabinoids for any medical condition.
Psychiatric Disorders
A major question remains as to whether marijuana can produce lasting mood disorders or psychotic disorders, such as schizophrenia. Georgotas and Zeidenberg[52] reported that smoking 10-22 marijuana cigarettes per day was associated with a gradual waning of the positive mood and social facilitating effects of marijuana and an increase in irritability, social isolation, and paranoid thinking. Inasmuch as smoking one cigarette is enough to make a person feel "high" for about 1-3 hours,[68,95,118] the subjects in that study were taking very high doses of marijuana. Reports have described the development of apathy, lowered motivation, and impaired educational performance in heavy marijuana users who do not appear to be behaviorally impaired in other ways.[121,122]

There are clinical reports of marijuana-induced psychosis-like states (schizophrenia-like, depression, and/or mania) lasting for a week or more.[112] As noted earlier, drug abuse is common among people with psychiatric disorders. Hollister[66] suggests that, because of the varied nature of the psychotic states induced by marijuana, there is no specific "marijuana psychosis." Rather, the marijuana experience might trigger latent psychopathology of many types. More recently, Hall and colleagues[60] concluded that "there is reasonable evidence that heavy cannabis use, and perhaps acute use in sensitive individuals, can produce an acute psychosis in which confusion, amnesia, delusions, hallucinations, anxiety, agitation and hypomanic symptoms predominate." Regardless of which of those interpretations is correct, the two reports agree that there is little evidence that marijuana alone produces a psychosis that persists after the period of intoxication.
Schizophrenia
The association between marijuana and schizophrenia is not well understood. The scientific literature indicates general agreement that heavy marijuana use can precipitate schizophrenic episodes but not that marijuana use can cause the underlying psychotic disorders.[59,96,151] Estimates of the prevalence of marijuana use among schizophrenics vary considerably but are in general agreement that it is at least as great as that among the general population.[35] Schizophrenics prefer the effects of marijuana to those of alcohol and cocaine,[134] which they seem to use less often than does the general population.[134] The reasons for this are unknown, but it raises the possibility that schizophrenics might obtain some symptomatic relief from moderate marijuana use. But overall, compared with the general population, people with schizophrenia or with a family history of schizophrenia are likely to be at greater risk for adverse psychiatric effects from the use of cannabinoids.
Cognition
As discussed earlier, acutely administered marijuana impairs cognition.[60,66,112] Positron emission tomography (PET) imaging allows investigators to measure the acute effects of marijuana smoking on active brain function. Human volunteers who perform auditory attention tasks before and after smoking a marijuana cigarette show impaired performance while under the influence of marijuana; this is associated with substantial reduction in blood flow to the temporal lobe of the brain, an area that is sensitive to such tasks.[116,117] Marijuana smoking increases blood flow in other brain regions, such as the frontal lobes and lateral cerebellum.[101,155]

Earlier studies purporting to show structural changes in the brains of heavy marijuana users[22] have not been replicated with more sophisticated techniques.[28,89] Nevertheless, recent studies[14,122] have found subtle defects in cognitive tasks in heavy marijuana users after a brief period (19-24 hours) of marijuana abstinence. Longer term cognitive deficits in heavy marijuana users have also been reported.[140] Although these studies have attempted to match heavy marijuana users with subjects of similar cognitive abilities before exposure to marijuana use, the adequacy of this matching has been questioned.[133] The complex methodological issues facing research in this area are well reviewed in an article by Pope and colleagues.[121]

Care must be exercised so that studies are designed to differentiate between changes in brain function caused by the effects of marijuana and by the illness for which marijuana is being given. AIDS dementia is an obvious example of this possible confusion. It is also important to determine whether repeated use of marijuana at therapeutic dosages produces any irreversible cognitive effects.
Psychomotor Performance
Marijuana administration has been reported to affect psychomotor performance on a number of tasks. The review by Chait and Pierri[23] not only details the studies that have been done but also points out the inconsistencies among studies, the methodological shortcomings of many studies, and the large individual differences among the studies attributable to subject, situational, and methodological factors. Those factors must be considered in studies of psychomotor performance when participants are involved in a clinical trial of the efficacy of marijuana. The types of psychomotor functions that have been shown to be disrupted by the acute administration of marijuana include body sway, hand steadiness, rotary pursuit, driving and flying simulation, divided attention, sustained attention, and the digit-symbol substitution test. A study of experienced airplane pilots showed that even 24 hours after a single marijuana cigarette their performance on flight simulator tests was impaired.[163] Before the tests, however, they told the study investigators that they were sure their performance would be unaffected.
Cognitive impairments associated with acutely administered marijuana limit the activities that people would be able to do safely or productively. For example, no one under the influence of marijuana or THC should drive a vehicle or operate potentially dangerous equipment.
Amotivational Syndrome
One of the more controversial effects claimed for marijuana is the production of an "amotivational syndrome." This syndrome is not a medical diagnosis, but it has been used to describe young people who drop out of social activities and show little interest in school, work, or other goal-directed activity. When heavy marijuana use accompanies these symptoms, the drug is often cited as the cause, but no convincing data demonstrate a causal relationship between marijuana smoking and these behavioral characteristics.[23] It is not enough to observe that a chronic marijuana user lacks motivation. Instead, relevant personality traits and behavior of subjects must be assessed before and after the subject becomes a heavy marijuana user. Because such research can only be done on subjects who become heavy marijuana users on their own, a large population study--such as the Epidemiological Catchment Area study described earlier in this chapter--would be needed to shed light on the relationship between motivation and marijuana use. Even then, although a causal relationship between the two could, in theory, be dismissed by an epidemiological study, causality could not be proven.
Three focal concerns in evaluating the medical use of marijuana are:
1. Evaluation of the effects of isolated cannabinoids;
2. Evaluation of the risks associated with the medical use of marijuana; and
3. Evaluation of the use of smoked marijuana.
EFFECTS OF ISOLATED CANNABINOIDS
Cannabinoid Biology
Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids.
Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions:
o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory.
o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear.
o The brain develops tolerance to cannabinoids.
o Animal research demonstrates the potential for dependence, but this
potential is observed under a narrower range of conditions than with
benzodiazepines, opiates, cocaine, or nicotine.
o Withdrawal symptoms can be observed in animals but appear to be mild
compared to opiates or benzodiazepines, such as diazepam (Valium).
Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems.
Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. Because different cannabinoids appear to
have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone.
Efficacy of Cannabinoid Drugs
The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.)
The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting.
Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified.
Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs.
Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances.
Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems.
Influence of Psychological Effects on Therapeutic Effects
The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those
patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite.
Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect.
Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials.
RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA
Physiological Risks
Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants.
For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use.
The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung
damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies.
Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease.
Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent.
Marijuana Dependence and Withdrawal
A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse.
Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping.
Marijuana as a "Gateway" Drug
Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age.
In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use.
Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would
not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential.
Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids.
USE OF SMOKED MARIJUANA
Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups.
Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy.
The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use.
Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions:
o failure of all approved medications to provide relief has been documented,
o the symptoms can reasonably be expected to be relieved by rapid- onset cannabinoid drugs,
o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and
o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a
submission by a physician to provide marijuana to a patient for a specified use.
Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.
It is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones.
Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use.
It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.
HOW THIS STUDY WAS CONDUCTED
Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions.
Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluating the methods used in various studies and the validity of the authors' conclusions. Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results
of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves.
The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers).
Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from.
The study team visited four cannabis buyers' clubs in California (the Oakland
Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los
Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical
Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in
Los Angeles and Louisiana State University Medical Center in New Orleans). We
listened to many individual stories from the buyers' clubs about using marijuana to treat a
variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS
patients. Marinol is the brand name for dronabinol, which is Δ9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.
MARIJUANA TODAY
The Changing Legal Landscape
In the 20th century, marijuana has been used more for its euphoric effects than as a
medicine. Its psychological and behavioral effects have concerned public officials since
the drug first appeared in the southwestern and southern states during the first two
decades of the century. By 1931, at least 29 states had prohibited use of the drug for
nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug. Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S.
Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior.
In the late 1960s and early 1970s, there was a sharp increase in marijuana use among
adolescents and young adults. The current legal status of marijuana was established in
1970 with the passage of the Controlled Substances Act, which divided drugs into five
schedules and placed marijuana in Schedule I, the category for drugs with high potential
for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In
1972, the National Organization for the Reform of Marijuana Legislation (NORML), an
organization that supports decriminalization of marijuana, unsuccessfully petitioned the
Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to
Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments,
less toxic, and in many cases more effective than conventional medicines.13 Thus, for 25 years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients.
Since NORML's petition in 1972, there have been a variety of legal decisions concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized use of marijuana, although some of them recriminalized marijuana use in the 1980s and 1990s. During the 1970s, reports of the medical value of marijuana began to appear, particularly claims that marijuana relieved the nausea associated with chemotherapy. Health departments in six states conducted small studies to investigate the reports. When the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved their symptoms, most dramatically those associated with AIDS wasting. Over this period a number of defendants charged with unlawful possession of marijuana claimed that they were using the drug to treat medical conditions and that violation of the law was therefore justified (the so-called medical necessity defense). Although most courts rejected these claims, some accepted them.8
Against that backdrop, voters in California and Arizona in 1996 passed two referenda that attempted to legalize the medical use of marijuana under particular conditions. Public support for patient access to marijuana for medical use appears substantial; public opinion polls taken during 1997 and 1998 generally reported 60 to 70 percent of respondents in favor of allowing medical uses of marijuana.15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions.
Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been
important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate.
Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D).
Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use.
Marijuana's use as an herbal remedy before the 20th century is well documented.1,10,11
However, modern medicine adheres to different standards from those used in the past.
The question is not whether marijuana can be used as an herbal remedy but rather how
well this remedy meets today's standards of efficacy and safety. We understand much
more than previous generations about medical risks. Our society generally expects its
licensed medications to be safe, reliable, and of proven efficacy; contaminants and
inconsistent ingredients in our health treatments are not tolerated. That refers not only to
prescription and over-the-counter drugs but also to vitamin supplements and herbal
remedies purchased at the grocery store. For example, the essential amino acid l-
tryptophan was widely sold in health food stores as a natural remedy for insomnia until
early 1990 when it became linked to an epidemic of a new and potentially fatal illness (eosinophilia-myalgia syndrome).9,12
When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of
the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer.
Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their
roots either directly or indirectly in plant remedies.7 At the same time, most current prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid.
Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of
modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development. Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds.
Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly
seek alternative, low-technology therapies.4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of visits to primary care physicians.5,6 Recent interest in the medical use of marijuana coincides with this trend toward self-help and a search for "natural" therapies. Indeed, several people who spoke at the IOM public hearings in support of the medical use of marijuana said that they generally preferred herbal medicines to standard pharmaceuticals. However, few alternative therapies have been carefully and systematically tested for safety and efficacy, as is required for medications approved by the FDA (Food and Drug Administration).2
WHO USES MEDICAL MARIJUANA?
There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed.
John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1).
The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile. For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old.
Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second- largest group is patients with chronic pain.
Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting.
Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it.
Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients).
Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission.
The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.
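The unknown-denominator problem described above can be made concrete with a small worked example. The following Python sketch is illustrative only: the count of positive anecdotes and the candidate totals of medical users are hypothetical numbers, not figures from this report.

```python
# Illustration of the unknown-denominator problem with anecdotal reports.
# All numbers below are hypothetical; they only show that the same count of
# favorable anecdotes is compatible with very different response rates.

positive_anecdotes = 40  # people heard from who say marijuana helped them

# Candidate totals of people who actually tried marijuana for medical purposes.
# The true total is unknown, which is exactly the problem described above.
candidate_totals = [50, 500, 5000]

for total_users in candidate_totals:
    apparent_rate = positive_anecdotes / total_users
    print(f"If {total_users} people tried it, the apparent response rate is {apparent_rate:.1%}")

# The apparent rate runs from 80% down to 0.8%, so the anecdotes alone cannot
# distinguish a highly effective treatment from a placebo-level one.
```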
CANNABIS AND THE CANNABINOIDS
Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of
marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Δ9-Tetrahydrocannabinol (Δ9-THC) is the primary psychoactive ingredient; depending on the particular plant, either THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate Δ9-THC. In the few cases where variants of THC are discussed, the full names are used. All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy."
Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.
Cannabinoids are produced in epidermal glands on the leaves (especially the upper
ones), stems, and the bracts that support the flowers of the marijuana plant. Although the
flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on
the plant, probably because of the accumulation of resin secreted by the supporting
bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and
their relative abundance in a marijuana plant vary with growing conditions, including
humidity, temperature, and soil nutrients (reviewed in Pate, 1994).14 The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition.
ORGANIZATION OF THE REPORT
Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology.
Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use.
Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana.
Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.
Three focal concerns in evaluating the medical use of marijuana are:
1. Evaluation of the effects of isolated cannabinoids;
2. Evaluation of the risks associated with the medical use of marijuana; and
3. Evaluation of the use of smoked marijuana.
EFFECTS OF ISOLATED CANNABINOIDS
Cannabinoid Biology
Much has been learned since the 1982 IOM report Marijuana and Health. Although it was clear then that most of the effects of marijuana were due to its actions on the brain, there was little information about how THC acted on brain cells (neurons), which cells were affected by THC, or even what general areas of the brain were most affected by THC. In addition, too little was known about cannabinoid physiology to offer any scientific insights into the harmful or therapeutic effects of marijuana. That all changed with the identification and characterization of cannabinoid receptors in the 1980s and 1990s. During the past 16 years, science has advanced greatly and can tell us much more about the potential medical benefits of cannabinoids.
Conclusion: At this point, our knowledge about the biology of marijuana and cannabinoids allows us to make some general conclusions:
o Cannabinoids likely have a natural role in pain modulation, control of movement, and memory.
o The natural role of cannabinoids in immune systems is likely multi-faceted and remains unclear.
o The brain develops tolerance to cannabinoids.
o Animal research demonstrates the potential for dependence, but this
potential is observed under a narrower range of conditions than with
benzodiazepines, opiates, cocaine, or nicotine.
o Withdrawal symptoms can be observed in animals but appear to be mild
compared to opiates or benzodiazepines, such as diazepam (Valium).
Conclusion: The different cannabinoid receptor types found in the body appear to play different roles in normal human physiology. In addition, some effects of cannabinoids appear to be independent of those receptors. The variety of mechanisms through which cannabinoids can influence human physiology underlies the variety of potential therapeutic uses for drugs that might act selectively on different cannabinoid systems.
Recommendation 1: Research should continue into the physiological effects of synthetic and plant-derived cannabinoids and the natural function of cannabinoids found in the body. Because different cannabinoids appear to
have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC alone.
Efficacy of Cannabinoid Drugs
The accumulated data indicate a potential therapeutic value for cannabinoid drugs, particularly for symptoms such as pain relief, control of nausea and vomiting, and appetite stimulation. The therapeutic effects of cannabinoids are best established for THC, which is generally one of the two most abundant of the cannabinoids in marijuana. (Cannabidiol is generally the other most abundant cannabinoid.)
The effects of cannabinoids on the symptoms studied are generally modest, and in most cases there are more effective medications. However, people vary in their responses to medications, and there will likely always be a subpopulation of patients who do not respond well to other medications. The combination of cannabinoid drug effects (anxiety reduction, appetite stimulation, nausea reduction, and pain relief) suggests that cannabinoids would be moderately well suited for particular conditions, such as chemotherapy-induced nausea and vomiting and AIDS wasting.
Defined substances, such as purified cannabinoid compounds, are preferable to plant products, which are of variable and uncertain composition. Use of defined cannabinoids permits a more precise evaluation of their effects, whether in combination or alone. Medications that can maximize the desired effects of cannabinoids and minimize the undesired effects can very likely be identified.
Although most scientists who study cannabinoids agree that the pathways to cannabinoid drug development are clearly marked, there is no guarantee that the fruits of scientific research will be made available to the public for medical use. Cannabinoid- based drugs will only become available if public investment in cannabinoid drug research is sustained and if there is enough incentive for private enterprise to develop and market such drugs.
Conclusion: Scientific data indicate the potential therapeutic value of cannabinoid drugs, primarily THC, for pain relief, control of nausea and vomiting, and appetite stimulation; smoked marijuana, however, is a crude THC delivery system that also delivers harmful substances.
Recommendation 2: Clinical trials of cannabinoid drugs for symptom management should be conducted with the goal of developing rapid-onset, reliable, and safe delivery systems.
Influence of Psychological Effects on Therapeutic Effects
The psychological effects of THC and similar cannabinoids pose three issues for the therapeutic use of cannabinoid drugs. First, for some patients--particularly older patients with no previous marijuana experience--the psychological effects are disturbing. Those
patients report experiencing unpleasant feelings and disorientation after being treated with THC, generally more severe for oral THC than for smoked marijuana. Second, for conditions such as movement disorders or nausea, in which anxiety exacerbates the symptoms, the antianxiety effects of cannabinoid drugs can influence symptoms indirectly. This can be beneficial or can create false impressions of the drug effect. Third, for cases in which symptoms are multifaceted, the combination of THC effects might provide a form of adjunctive therapy; for example, AIDS wasting patients would likely benefit from a medication that simultaneously reduces anxiety, pain, and nausea while stimulating appetite.
Conclusion: The psychological effects of cannabinoids, such as anxiety reduction, sedation, and euphoria can influence their potential therapeutic value. Those effects are potentially undesirable for certain patients and situations and beneficial for others. In addition, psychological effects can complicate the interpretation of other aspects of the drug's effect.
Recommendation 3: Psychological effects of cannabinoids such as anxiety reduction and sedation, which can influence medical benefits, should be evaluated in clinical trials.
RISKS ASSOCIATED WITH MEDICAL USE OF MARIJUANA
Physiological Risks
Marijuana is not a completely benign substance. It is a powerful drug with a variety of effects. However, except for the harms associated with smoking, the adverse effects of marijuana use are within the range of effects tolerated for other medications. The harmful effects to individuals from the perspective of possible medical use of marijuana are not necessarily the same as the harmful physical effects of drug abuse. When interpreting studies purporting to show the harmful effects of marijuana, it is important to keep in mind that the majority of those studies are based on smoked marijuana, and cannabinoid effects cannot be separated from the effects of inhaling smoke from burning plant material and contaminants.
For most people the primary adverse effect of acute marijuana use is diminished psychomotor performance. It is, therefore, inadvisable to operate any vehicle or potentially dangerous equipment while under the influence of marijuana, THC, or any cannabinoid drug with comparable effects. In addition, a minority of marijuana users experience dysphoria, or unpleasant feelings. Finally, the short-term immunosuppressive effects are not well established but, if they exist, are not likely great enough to preclude a legitimate medical use.
The chronic effects of marijuana are of greater concern for medical use and fall into two categories: the effects of chronic smoking and the effects of THC. Marijuana smoking is associated with abnormalities of cells lining the human respiratory tract. Marijuana smoke, like tobacco smoke, is associated with increased risk of cancer, lung
damage, and poor pregnancy outcomes. Although cellular, genetic, and human studies all suggest that marijuana smoke is an important risk factor for the development of respiratory cancer, proof that habitual marijuana smoking does or does not cause cancer awaits the results of well-designed studies.
Conclusion: Numerous studies suggest that marijuana smoke is an important risk factor in the development of respiratory disease.
Recommendation 4: Studies to define the individual health risks of smoking marijuana should be conducted, particularly among populations in which marijuana use is prevalent.
Marijuana Dependence and Withdrawal
A second concern associated with chronic marijuana use is dependence on the psychoactive effects of THC. Although few marijuana users develop dependence, some do. Risk factors for marijuana dependence are similar to those for other forms of substance abuse. In particular, anti-social personality and conduct disorders are closely associated with substance abuse.
Conclusion: A distinctive marijuana withdrawal syndrome has been identified, but it is mild and short lived. The syndrome includes restlessness, irritability, mild agitation, insomnia, sleep disturbance, nausea, and cramping.
Marijuana as a "Gateway" Drug
Patterns in progression of drug use from adolescence to adulthood are strikingly regular. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug most people encounter. Not surprisingly, most users of other illicit drugs have used marijuana first. In fact, most drug users begin with alcohol and nicotine before marijuana--usually before they are of legal age.
In the sense that marijuana use typically precedes rather than follows initiation of other illicit drug use, it is indeed a "gateway" drug. But because underage smoking and alcohol use typically precede marijuana use, marijuana is not the most common, and is rarely the first, "gateway" to illicit drug use. There is no conclusive evidence that the drug effects of marijuana are causally linked to the subsequent abuse of other illicit drugs. An important caution is that data on drug use progression cannot be assumed to apply to the use of drugs for medical purposes. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would remain the same as seen in illicit use.
Finally, there is a broad social concern that sanctioning the medical use of marijuana might increase its use among the general population. At this point there are no convincing data to support this concern. The existing data are consistent with the idea that this would
not be a problem if the medical use of marijuana were as closely regulated as other medications with abuse potential.
Conclusion: Present data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse. However, this question is beyond the issues normally considered for medical uses of drugs and should not be a factor in evaluating the therapeutic potential of marijuana or cannabinoids.
USE OF SMOKED MARIJUANA
Because of the health risks associated with smoking, smoked marijuana should generally not be recommended for long-term medical use. Nonetheless, for certain patients, such as the terminally ill or those with debilitating symptoms, the long-term risks are not of great concern. Further, despite the legal, social, and health problems associated with smoking marijuana, it is widely used by certain patient groups.
Recommendation 5: Clinical trials of marijuana use for medical purposes should be conducted under the following limited circumstances: trials should involve only short-term marijuana use (less than six months), should be conducted in patients with conditions for which there is reasonable expectation of efficacy, should be approved by institutional review boards, and should collect data about efficacy.
The goal of clinical trials of smoked marijuana would not be to develop marijuana as a licensed drug but rather to serve as a first step toward the possible development of nonsmoked rapid-onset cannabinoid delivery systems. However, it will likely be many years before a safe and effective cannabinoid delivery system, such as an inhaler, is available for patients. In the meantime there are patients with debilitating symptoms for whom smoked marijuana might provide relief. The use of smoked marijuana for those patients should weigh both the expected efficacy of marijuana and ethical issues in patient care, including providing information about the known and suspected risks of smoked marijuana use.
Recommendation 6: Short-term use of smoked marijuana (less than six months) for patients with debilitating symptoms (such as intractable pain or vomiting) must meet the following conditions:
o failure of all approved medications to provide relief has been documented,
o the symptoms can reasonably be expected to be relieved by rapid- onset cannabinoid drugs,
o such treatment is administered under medical supervision in a manner that allows for assessment of treatment effectiveness, and
o involves an oversight strategy comparable to an institutional review board process that could provide guidance within 24 hours of a
submission by a physician to provide marijuana to a patient for a specified use.
Until a nonsmoked rapid-onset cannabinoid drug delivery system becomes available, we acknowledge that there is no clear alternative for people suffering from chronic conditions that might be relieved by smoking marijuana, such as pain or AIDS wasting. One possible approach is to treat patients as n-of-1 clinical trials (single-patient trials), in which patients are fully informed of their status as experimental subjects using a harmful drug delivery system and in which their condition is closely monitored and documented under medical supervision, thereby increasing the knowledge base of the risks and benefits of marijuana use under such conditions.data, it is important to understand that decisions about drug regulation are based on a variety of moral and social considerations, as well as on medical and scientific ones.
Even when a drug is used only for medical purposes, value judgments affect policy decisions concerning its medical use. For example, the magnitude of a drug's expected medical benefit affects regulatory judgments about the acceptability of risks associated with its use. Also, although a drug is normally approved for medical use only on proof of its "safety and efficacy," patients with life-threatening conditions are sometimes (under protocols for "compassionate use") allowed access to unapproved drugs whose benefits and risks are uncertain. Value judgments play an even more substantial role in regulatory decisions concerning drugs, such as marijuana, that are sought and used for nonmedical purposes. Then policymakers must take into account not only the risks and benefits associated with medical use but also possible interactions between the regulatory arrangements governing medical use and the integrity of the legal controls set up to restrict nonmedical use.
It should be clear that many elements of drug control policy lie outside the realm of biology and medicine. Ultimately, the complex moral and social judgments that underlie drug control policy must be made by the American people and their elected officials. A goal of this report is to evaluate the biological and medical factors that should be taken into account in making those judgments.
HOW THIS STUDY WAS CONDUCTED
Information was gathered through scientific workshops, site visits, analysis of the relevant scientific literature, and extensive consultation with biomedical and social scientists. The three 2-day workshops--in Irvine, California; New Orleans, Louisiana; and Washington, D.C.--were open to the public and included scientific presentations and reports, mostly from patients and their families, about their experiences with and perspectives on the medical use of marijuana. Scientific experts in various fields were selected to talk about the latest research on marijuana, cannabinoids, and related topics (listed in Appendix B). Selection of the experts was based on recommendations by their peers, who ranked them among the most accomplished scientists and the most knowledgeable about marijuana and cannabinoids in their own fields. In addition, advocates for (John Morgan) and against (Eric A. Voth) the medical use of marijuana were invited to present scientific evidence in support of their positions.
Information presented at the scientific workshops was supplemented by analysis of the scientific literature and evaluating the methods used in various studies and the validity of the authors' conclusions. Different kinds of clinical studies are useful in different ways: results of a controlled double-blind study with adequate sample sizes can be expected to apply to the general population from which study subjects were drawn; an isolated case report can suggest further studies but cannot be presumed to be broadly applicable; and survey data can be highly informative but are generally limited by the need to rely on self-reports of drug use and on unconfirmed medical diagnoses. This report relies mainly on the most relevant and methodologically rigorous studies available and treats the results
of more limited studies cautiously. In addition, study results are presented in such a way as to allow thoughtful readers to judge the results themselves.
The Institute of Medicine (IOM) appointed a panel of nine experts to advise the study team on technical issues. These included neurology and the treatment of pain (Howard Fields); regulation of prescription drugs (J. Richard Crout); AIDS wasting and clinical trials (Judith Feinberg); treatment and pathology of multiple sclerosis (Timothy Vollmer); drug dependence among adolescents (Thomas Crowley); varieties of drug dependence (Dorothy Hatsukami); internal medicine, health care delivery, and clinical epidemiology (Eric B. Larson); cannabinoids and marijuana pharmacology (Billy R. Martin); and cannabinoid neuroscience (Steven R. Childers).
Public outreach included setting up a Web site that provided information about the study and asked for input from the public. The Web site was open for comment from November 1997 until November 1998. Some 130 organizations were invited to participate in the public workshops. Many people in the organizations--particularly those opposed to the medical use of marijuana--felt that a public forum was not conducive to expressing their views; they were invited to communicate their opinions (and reasons for holding them) by mail or telephone. As a result, roughly equal numbers of persons and organizations opposed to and in favor of the medical use of marijuana were heard from.
The study team visited four cannabis buyers' clubs in California (the Oakland Cannabis Buyers' Cooperative, the San Francisco Cannabis Cultivators Club, the Los Angeles Cannabis Resource Center, and Californians Helping Alleviate Medical Problems, or CHAMPS) and two HIV/AIDS clinics (AIDS Health Care Foundation in Los Angeles and Louisiana State University Medical Center in New Orleans). We listened to many individual stories from the buyers' clubs about using marijuana to treat a variety of symptoms and heard clinical observations on the use of Marinol to treat AIDS patients. Marinol is the brand name for dronabinol, which is Δ9-tetrahydrocannabinol (THC) in pill form and is available by prescription for the treatment of nausea associated with chemotherapy and AIDS wasting.
MARIJUANA TODAY
The Changing Legal Landscape
In the 20th century, marijuana has been used more for its euphoric effects than as a medicine. Its psychological and behavioral effects have concerned public officials since the drug first appeared in the southwestern and southern states during the first two decades of the century. By 1931, at least 29 states had prohibited use of the drug for nonmedical purposes.3 Marijuana was first regulated at the federal level by the Marijuana Tax Act of 1937, which required anyone producing, distributing, or using marijuana for medical purposes to register and pay a tax and which effectively prohibited nonmedical use of the drug. Although the act did not make medical use of marijuana illegal, it did make it expensive and inconvenient. In 1942, marijuana was removed from the U.S. Pharmacopoeia because it was believed to be a harmful and addictive drug that caused psychoses, mental deterioration, and violent behavior.
In the late 1960s and early 1970s, there was a sharp increase in marijuana use among adolescents and young adults. The current legal status of marijuana was established in 1970 with the passage of the Controlled Substances Act, which divided drugs into five schedules and placed marijuana in Schedule I, the category for drugs with high potential for abuse and no accepted medical use (see Appendix C, Scheduling Definitions). In 1972, the National Organization for the Reform of Marijuana Legislation (NORML), an organization that supports decriminalization of marijuana, unsuccessfully petitioned the Bureau of Narcotics and Dangerous Drugs to move marijuana from Schedule I to Schedule II. NORML argued that marijuana is therapeutic in numerous serious ailments, less toxic, and in many cases more effective than conventional medicines.13 Thus, for 25 years the medical marijuana movement has been closely linked with the marijuana decriminalization movement, which has colored the debate. Many people criticized that association in their letters to IOM and during the public workshops of this study. The argument against the medical use of marijuana presented most often to the IOM study team was that "the medical marijuana movement is a Trojan horse"; that is, it is a deceptive tactic used by advocates of marijuana decriminalization who would exploit the public's sympathy for seriously ill patients.
Since NORML's petition in 1972, there have been a variety of legal decisions concerning marijuana. From 1973 to 1978, 11 states adopted statutes that decriminalized use of marijuana, although some of them recriminalized marijuana use in the 1980s and 1990s. During the 1970s, reports of the medical value of marijuana began to appear, particularly claims that marijuana relieved the nausea associated with chemotherapy. Health departments in six states conducted small studies to investigate the reports. When the AIDS epidemic spread in the 1980s, patients found that marijuana sometimes relieved their symptoms, most dramatically those associated with AIDS wasting. Over this period a number of defendants charged with unlawful possession of marijuana claimed that they were using the drug to treat medical conditions and that violation of the law was therefore justified (the so-called medical necessity defense). Although most courts rejected these claims, some accepted them.8
Against that backdrop, voters in California and Arizona in 1996 passed two referenda that attempted to legalize the medical use of marijuana under particular conditions. Public support for patient access to marijuana for medical use appears substantial; public opinion polls taken during 1997 and 1998 generally reported 60—70 percent of respondents in favor of allowing medical uses of marijuana.15 However, those referenda are at odds with federal laws regulating marijuana, and their implementation raises complex legal questions.
Despite the current level of interest, referenda and public discussions have not been well informed by carefully reasoned scientific debate. Although previous reports have all called for more research, the nature of the research that will be most helpful depends greatly on the specific health conditions to be addressed. And while there have been important recent advances in our understanding of the physiological effects of marijuana, few of the recent investigators have had the time or resources to permit detailed analysis. The results of those advances, only now beginning to be explored, have significant implications for the medical marijuana debate.
Several months after the passage of the California and Arizona medical marijuana referendums, the Office of National Drug Control Policy (ONDCP) asked whether IOM would conduct a scientific review of the medical value of marijuana and its constituent compounds. In August 1997, IOM formally began the study and appointed John A. Benson Jr. and Stanley J. Watson Jr. to serve as principal investigators for the study. The charge to IOM was to review the medical use of marijuana and the harms and benefits attributed to it (details are given in Appendix D).
Marijuana plants have been used since antiquity for both herbal medication and intoxication. The current debate over the medical use of marijuana is essentially a debate over the value of its medicinal properties relative to the risk posed by its use.
Marijuana's use as an herbal remedy before the 20th century is well documented.1,10,11
However, modern medicine adheres to different standards from those used in the past.
The question is not whether marijuana can be used as an herbal remedy but rather how
well this remedy meets today's standards of efficacy and safety. We understand much
more than previous generations about medical risks. Our society generally expects its
licensed medications to be safe, reliable, and of proven efficacy; contaminants and
inconsistent ingredients in our health treatments are not tolerated. That refers not only to
prescription and over-the-counter drugs but also to vitamin supplements and herbal
remedies purchased at the grocery store. For example, the essential amino acid l-tryptophan was widely sold in health food stores as a natural remedy for insomnia until early 1990 when it became linked to an epidemic of a new and potentially fatal illness (eosinophilia-myalgia syndrome).9,12 When it was removed from the market shortly thereafter, there was little protest, despite the fact that it was safe for the vast majority of the population. The 1,536 cases and 27 deaths were later traced to contaminants in a batch produced by a single Japanese manufacturer.
Although few herbal medicines meet today's standards, they have provided the foundation for modern Western pharmaceuticals. Most current prescriptions have their roots either directly or indirectly in plant remedies.7 At the same time, most current prescriptions are synthetic compounds that are only distantly related to the natural compounds that led to their development. Digitalis was discovered in foxglove, morphine in poppies, and taxol in the yew tree. Even aspirin (acetylsalicylic acid) has its counterpart in herbal medicine: for many generations, American Indians relieved headaches by chewing the bark of the willow tree, which is rich in a related form of salicylic acid.
Although plants continue to be valuable resources for medical advances, drug development is likely to be less and less reliant on plants and more reliant on the tools of modern science. Molecular biology, bioinformatics software, and DNA array-based analyses of genes and chemistry are all beginning to yield great advances in drug discovery and development. Until recently, drugs could only be discovered; now they can be designed. Even the discovery process has been accelerated through the use of modern drug-screening techniques. It is increasingly possible to identify or isolate the chemical compounds in a plant, determine which compounds are responsible for the plant's effects, and select the most effective and safe compounds--either for use as purified substances or as tools to develop even more effective, safer, or less expensive compounds.
Yet even as the modern pharmacological toolbox becomes more sophisticated and biotechnology yields an ever greater abundance of therapeutic drugs, people increasingly seek alternative, low-technology therapies.4,5 In 1997, 46 percent of Americans sought nontraditional medicines and spent over 27 billion unreimbursed dollars; the total number of visits to alternative medicine practitioners appears to have exceeded the number of visits to primary care physicians.5,6 Recent interest in the medical use of marijuana coincides with this trend toward self-help and a search for "natural" therapies. Indeed, several people who spoke at the IOM public hearings in support of the medical use of marijuana said that they generally preferred herbal medicines to standard pharmaceuticals. However, few alternative therapies have been carefully and systematically tested for safety and efficacy, as is required for medications approved by the FDA (Food and Drug Administration).2
WHO USES MEDICAL MARIJUANA?
There have been no comprehensive surveys of the demographics and medical conditions of medical marijuana users, but a few reports provide some indication. In each case, survey results should be understood to reflect the situation in which they were conducted and are not necessarily characteristic of medical marijuana users as a whole. Respondents to surveys reported to the IOM study team were all members of "buyers' clubs," organizations that provide their members with marijuana, although not necessarily through direct cash transactions. The atmosphere of the marijuana buyers' clubs ranges from that of the comparatively formal and closely regulated Oakland Cannabis Buyers' Cooperative to that of a "country club for the indigent," as Denis Peron described the San Francisco Cannabis Cultivators Club (SFCCC), which he directed.
John Mendelson, an internist and pharmacologist at the University of California, San Francisco (UCSF) Pain Management Center, surveyed 100 members of the SFCCC who were using marijuana at least weekly. Most of the respondents were unemployed men in their forties. Subjects were paid $50 to participate in the survey; this might have encouraged a greater representation of unemployed subjects. All subjects were tested for drug use. About half tested positive for marijuana only; the other half tested positive for drugs in addition to marijuana (23% for cocaine and 13% for amphetamines). The predominant disorder was AIDS, followed by roughly equal numbers of members who reported chronic pain, mood disorders, and musculoskeletal disorders (Table 1.1).
The membership profile of the San Francisco club was similar to that of the Los Angeles Cannabis Resource Center (LACRC), where 83% of the 739 patients were men, 45% were 36—45 years old, and 71% were HIV positive. Table 1.2 shows a distribution of conditions somewhat different from that in SFCCC respondents, probably because of a different membership profile. For example, cancer is generally a disease that occurs late in life; 34 (4.7%) of LACRC members were over 55 years old; only 2% of survey respondents in the SFCCC study were over 55 years old.
Jeffrey Jones, executive director of the Oakland Cannabis Buyers' Cooperative, reported that its largest group of patients is HIV-positive men in their forties. The second- largest group is patients with chronic pain.
Among the 42 people who spoke at the public workshops or wrote to the study team, only six identified themselves as members of marijuana buyers' clubs. Nonetheless, they presented a similar profile: HIV/AIDS was the predominant disorder, followed by chronic pain (Tables 1.3 and 1.4). All HIV/AIDS patients reported that marijuana relieved nausea and vomiting and improved their appetite. About half the patients who reported using marijuana for chronic pain also reported that it reduced nausea and vomiting.
Note that the medical conditions referred to are only those reported to the study team or to interviewers; they cannot be assumed to represent complete or accurate diagnoses. Michael Rowbotham, a neurologist at the UCSF Pain Management Center, noted that many pain patients referred to that center arrive with incorrect diagnoses or with pain of unknown origin. At that center the patients who report medical benefit from marijuana say that it does not reduce their pain but enables them to cope with it.
Most--not all--people who use marijuana to relieve medical conditions have previously used it recreationally. An estimated 95% of the LACRC members had used marijuana before joining the club. It is important to emphasize the absence of comprehensive information on marijuana use before its use for medical conditions. Frequency of prior use almost certainly depends on many factors, including membership in a buyers' club, membership in a population sector that uses marijuana more often than others (for example, men 20—30 years old), and the medical condition being treated with marijuana (for example, there are probably relatively fewer recreational marijuana users among cancer patients than among AIDS patients).
Patients who reported their experience with marijuana at the public workshops said that marijuana provided them with great relief from symptoms associated with disparate diseases and ailments, including AIDS wasting, spasticity from multiple sclerosis, depression, chronic pain, and nausea associated with chemotherapy. Their circumstances and symptoms were varied, and the IOM study team was not in a position to make medical evaluations or confirm diagnoses. Three representative cases presented to the IOM study team are presented in Box 1.1; the stories have been edited for brevity, but each case is presented in the patient's words and with the patient's permission.
The variety of stories presented left the study team with a clear view of people's beliefs about how marijuana had helped them. But this collection of anecdotal data, although useful, is limited. We heard many positive stories but no stories from people who had tried marijuana but found it ineffective. This is a fraction with an unknown denominator. For the numerator we have a sample of positive responses; for the denominator we have no idea of the total number of people who have tried marijuana for medical purposes. Hence, it is impossible to estimate the clinical value of marijuana or cannabinoids in the general population based on anecdotal reports. Marijuana clearly seems to relieve some symptoms for some people--even if only as a placebo effect. But what is the balance of harmful and beneficial effects? That is the essential medical question that can be answered only by careful analysis of data collected under controlled conditions.
CANNABIS AND THE CANNABINOIDS
Marijuana is the common name for Cannabis sativa, a hemp plant that grows throughout temperate and tropical climates. The most recent review of the constituents of marijuana lists 66 cannabinoids (Table 1.5).16 But that does not mean there are 66 different cannabinoid effects or interactions. Most of the cannabinoids are closely related; they fall into only 10 groups of closely related cannabinoids, many of which differ by only a single chemical moiety and might be midpoints along biochemical pathways--that is, degradation products, precursors, or byproducts.16,18 Δ9-tetrahydrocannabinol (Δ9-THC) is the primary psychoactive ingredient; depending on the particular plant, either Δ9-THC or cannabidiol is the most abundant cannabinoid in marijuana (Figure 1.1). Throughout this report, THC is used to indicate Δ9-THC. In the few cases where variants of THC are discussed, the full names are used. All the cannabinoids are lipophilic--they are highly soluble in fatty fluids and tissues but not in water. Indeed, THC is so lipophilic that it is aptly described as "greasy."
Throughout this report, marijuana refers to unpurified plant extracts, including leaves and flower tops, regardless of how they are consumed--whether by ingestion or by smoking. References to the effects of marijuana should be understood to include the composite effects of its various components; that is, the effects of THC are included among the effects of marijuana, but not all the effects of marijuana are necessarily due to THC. Discussions concerning cannabinoids refer only to those particular compounds and not to the plant extract. This distinction is important; it is often blurred or exaggerated.
Cannabinoids are produced in epidermal glands on the leaves (especially the upper ones), stems, and the bracts that support the flowers of the marijuana plant. Although the flower itself has no epidermal glands, it has the highest cannabinoid content anywhere on the plant, probably because of the accumulation of resin secreted by the supporting bracteole (the small leaf-like part below the flower). The amounts of cannabinoids and their relative abundance in a marijuana plant vary with growing conditions, including humidity, temperature, and soil nutrients (reviewed in Pate, 1994 14). The chemical stability of cannabinoids in harvested plant material is also affected by moisture, temperature, sunlight, and storage. They degrade under any storage condition.
ORGANIZATION OF THE REPORT
Throughout the report, steps that might be taken to fill the gaps in understanding both the potential harms and benefits of marijuana and cannabinoid use are identified. Those steps include identifying knowledge gaps, promising research directions, and potential therapies based on scientific advances in cannabinoid biology.
Chapter 2 reviews basic cannabinoid biology and provides a foundation to understand the medical value of marijuana or its constituent cannabinoids. In consideration of the physician's first rule, "first, do no harm," the potential harms attributed to the medical use of marijuana are reviewed before the potential medical benefits. Chapter 3 reviews the risks posed by marijuana use, with emphasis on medical use.
Chapter 4 analyzes the most credible clinical data relevant to the medical use of marijuana. It reviews what is known about the physiological mechanisms underlying particular conditions (for example, chronic pain, vomiting, anorexia, and muscle spasticity), what is known about the cellular actions of cannabinoids, and the levels of proof needed to show that marijuana is an effective treatment for specific symptoms. It does not analyze the historical literature; history is informative in enumerating uses of marijuana, but it does not provide the sort of information needed for a scientifically sound evaluation of the efficacy and safety of marijuana for clinical use. Because marijuana is advocated primarily as affording relief from the symptoms of disease rather than as a cure, this chapter is organized largely by symptoms as opposed to disease categories. Finally, chapter 4 compares the conclusions of this report with those of other recent reports on the medical use of marijuana.
Chapter 5 describes the process of and analyzes the prospects for cannabinoid drug development.
Primum non nocere. This is the physician's first rule: whatever treatment a physician prescribes to a patient--first, that treatment must not harm the patient.
The most contentious aspect of the medical marijuana debate is not whether marijuana can alleviate particular symptoms but rather the degree of harm associated with its use. This chapter explores the negative health consequences of marijuana use, first with respect to drug abuse, then from a psychological perspective, and finally from a physiological perspective.
THE MARIJUANA "HIGH"
The most commonly reported effects of smoked marijuana are a sense of well-being or euphoria and increased talkativeness and laughter alternating with periods of introspective dreaminess followed by lethargy and sleepiness (see reviews by Adams and Martin, 1996,1 Hall and Solowij,59 and Hall et al.60). A characteristic feature of a marijuana "high" is a distortion in the sense of time associated with deficits in short-term memory and learning. A marijuana smoker typically has a sense of enhanced physical and emotional sensitivity, including a feeling of greater interpersonal closeness. The most obvious behavioral abnormality displayed by someone under the influence of marijuana is difficulty in carrying on an intelligible conversation, perhaps because of an inability to remember what was just said even a few words earlier.
The high associated with marijuana is not generally claimed to be integral to its therapeutic value. But mood enhancement, anxiety reduction, and mild sedation can be desirable qualities in medications--particularly for patients suffering pain and anxiety. Thus, although the psychological effects of marijuana are merely side effects in the treatment of some symptoms, they might contribute directly to relief of other symptoms. They also must be monitored in controlled clinical trials to discern which effect of cannabinoids is beneficial. These possibilities are discussed later under the discussions of specific symptoms in chapter 4.
The effects of various doses and routes of delivery of THC are shown in Table 3.1.
Adverse Mood Reactions
Although euphoria is the more common reaction to smoking marijuana, adverse mood reactions can occur. Such reactions occur most frequently in inexperienced users after large doses of smoked or oral marijuana. They usually disappear within hours and respond well to reassurance and a supportive environment. Anxiety and paranoia are the most common acute adverse reactions;59 others include panic, depression, dysphoria, depersonalization, delusions, illusions, and hallucinations.1,40,66,69 Of regular marijuana smokers, 17% report that they have experienced at least one of the symptoms, usually early in their use of marijuana.145 Those observations are particularly relevant for the use of medical marijuana in people who have not previously used marijuana.
DRUG DYNAMICS
There are many misunderstandings about drug abuse and dependence (see reviews by O'Brien114 and Goldstein54). The terms and concepts used in this report are as defined in the most recent Diagnostic and Statistical Manual of Mental Disorders (DSM-IV),3 the most influential system in the United States for diagnoses of mental disorders, including substance abuse (see Box 3.1). Tolerance, dependence, and withdrawal are often presumed to imply abuse or addiction, but this is not the case. Tolerance and dependence are normal physiological adaptations to repeated use of any drug. The correct use of prescribed medications for pain, anxiety, and even hypertension commonly produces tolerance and some measure of physiological dependence.
Even a patient who takes a medicine for appropriate medical indications and at the correct dosage can develop tolerance, physical dependence, and withdrawal symptoms if the drug is stopped abruptly rather than gradually. For example, a hypertensive patient receiving a beta-adrenergic receptor blocker, such as propranolol, might have a good therapeutic response; but if the drug is stopped abruptly, there can be a withdrawal syndrome that consists of tachycardia and a rebound increase in blood pressure to a point that is temporarily higher than before administration of the medication began.
Because it is an illegal substance, some people consider any use of marijuana as substance abuse. However, this report uses the medical definition; that is, substance abuse is a maladaptive pattern of repeated substance use manifested by recurrent and significant adverse consequences.3 Substance abuse and dependence are both diagnoses of pathological substance use. Dependence is the more serious diagnosis and implies compulsive drug use that is difficult to stop despite significant substance-related problems (see Box 3.2).
Reinforcement
Drugs vary in their ability to produce good feelings in users, and the more strongly reinforcing a drug is, the more likely it will be abused (G. Koob, Institute of Medicine (IOM) workshop). Marijuana is indisputably reinforcing for many people. The reinforcing properties of even so mild a stimulant as caffeine are typical of reinforcement by addicting drugs (reviewed by Goldstein54 in 1994). Caffeine is reinforcing for many people at low doses (100—200 mg, the average amount of caffeine in one to two cups of coffee) and is aversive at high doses (600 mg, the average amount of caffeine in six cups of coffee). The reinforcing effects of many drugs are different for different people. For example, caffeine was most reinforcing for test subjects who scored lowest on tests of anxiety but tended not to be reinforcing for the most anxious subjects.
As an argument to dispute the abuse potential of marijuana, some have cited the observation that animals do not willingly self-administer THC, as they will cocaine. Even if that were true, it would not be relevant to human use of marijuana. The value in animal models of drug self-administration is not that they are necessary to show that a drug is reinforcing but rather that they provide a model in which the effects of a drug can be studied. Furthermore, THC is indeed rewarding to animals at some doses but, like many reinforcing drugs, is aversive at high doses (4.0 mg/kg).93 Similar effects have been found in experiments conducted in animals outfitted with intravenous catheters that allow them to self-administer WIN 55,212, a drug that mimics the effects of THC.100
A specific set of neural pathways has been proposed to be a "reward system" that underlies the reinforcement of drugs of abuse and other pleasurable stimuli.51 Reinforcing properties of drugs are associated with their ability to increase concentrations of particular neurotransmitters in areas that are part of the proposed brain reward system. The median forebrain bundle and the nucleus accumbens are associated with brain reward pathways.88,144 Cocaine, amphetamine, alcohol, opioids, nicotine, and THC all increase extracellular fluid dopamine in the nucleus accumbens region (reviewed by Koob and Le Moal88 and Nestler and Aghajanian110 in 1997). However, it is important to note that brain reward systems are not strictly "drug reinforcement centers." Rather, their biological role is to respond to a range of positive stimuli, including sweet foods and sexual attraction.
Tolerance
The rate at which tolerance to the various effects of any drug develops is an important consideration for its safety and efficacy. For medical use, tolerance to some effects of cannabinoids might be desirable. Differences in the rates at which tolerance to the multiple effects of a drug develops can be dangerous. For example, tolerance to the euphoric effects of heroin develops faster than tolerance to its respiratory depressant effects, so heroin users tend to increase their daily doses to reach their desired level of euphoria, thereby putting themselves at risk for respiratory arrest. Because tolerance to the various effects of cannabinoids might develop at different rates, it is important to evaluate independently their effects on mood, motor performance, memory, and attention, as well as any therapeutic use under investigation.
Tolerance to most of the effects of marijuana can develop rapidly after only a few doses, and it also disappears rapidly. Tolerance to large doses has been found to persist in experimental animals for long periods after cessation of drug use. Performance impairment is less among people who use marijuana heavily than it is among those who use marijuana only occasionally,29,104,124 possibly because of tolerance. Heavy users tend to reach higher plasma concentrations of THC than light users after similar doses of THC, arguing against the possibility that heavy users show less performance impairment because they somehow absorb less THC (perhaps due to differences in smoking behavior).95
There appear to be variations in the development of tolerance to the different effects of marijuana and oral THC. For example, daily marijuana smokers participated in a residential laboratory study to compare the development of tolerance to THC pills and to smoked marijuana.61,62 One group was given marijuana cigarettes to smoke four times per day for four consecutive days; another group was given THC pills on the same schedule. During the four-day period, both groups became tolerant to feeling "high" and what they reported as a "good drug effect." In contrast, neither group became tolerant to the stimulatory effects of marijuana or THC on appetite. "Tolerance" does not mean that the drug no longer produced the effects but simply that the effects were less at the end than at the beginning of the four-day period. The marijuana smoking group reported feeling "mellow" after smoking and did not show tolerance to this effect; the group that took THC pills did not report feeling "mellow." The difference was also reported by many people who described their experiences to the IOM study team.
The oral and smoked doses were designed to deliver roughly equivalent amounts of THC to a subject. Each smoked marijuana dose consisted of five 10-second puffs of a marijuana cigarette containing 3.1% THC; the pills contained 30 mg of THC. Both groups also received placebo drugs during other four-day periods. Although the dosing of the two groups was comparable, different routes of administration resulted in different patterns of drug effect. The peak effect of smoked marijuana is usually felt within minutes and declines sharply after 30 minutes;68,95 the peak effect of oral THC is usually not felt until about an hour and lasts for several hours.
Withdrawal
A distinctive marijuana and THC withdrawal syndrome has been identified, but it is mild and subtle compared with the profound physical syndrome of alcohol or heroin withdrawal.31,74 The symptoms of marijuana withdrawal include restlessness, irritability, mild agitation, insomnia, sleep EEG disturbance, nausea, and cramping (Table 3.2). In addition to those symptoms, two recent studies noted several more. A group of adolescents under treatment for conduct disorders also reported fatigue and illusions or hallucinations after marijuana abstinence (this study is discussed further in the section on "Prevalence and Predictors of Dependence on Marijuana and Other Drugs").31 In a residential study of daily marijuana users, withdrawal symptoms included sweating and runny nose, in addition to those listed above.62 A marijuana withdrawal syndrome, however, has been reported only in a group of adolescents in treatment for substance abuse problems31 and in a research setting where subjects were given marijuana or THC daily.62,74
Withdrawal symptoms have been observed in carefully controlled laboratory studies of people after use of both oral THC and smoked marijuana.61,62 In one study, subjects were given very high doses of oral THC: 180—210 mg per day for 10—20 days, roughly equivalent to smoking 9—10 2% THC cigarettes per day.118 During the abstinence period at the end of the study, the study subjects were irritable and showed insomnia, runny nose, sweating, and decreased appetite. The withdrawal symptoms, however, were short lived. In four days they had abated. The time course contrasts with that in another study in which lower doses of oral THC were used (80—120 mg/day for four days) and withdrawal symptoms were still near maximal after four days.61,62
In animals, simply discontinuing chronic heavy dosing of THC does not reveal withdrawal symptoms, but the "removal" of THC from the brain can be made abrupt by another drug that blocks THC at its receptor if administered when the chronic THC is withdrawn. The withdrawal syndrome is pronounced, and the behavior of the animals becomes hyperactive and disorganized.153 The half-life of THC in brain is about an hour.16,24 Although traces of THC can remain in the brain for much longer periods, the amounts are not physiologically significant. Thus, the lack of a withdrawal syndrome when THC is abruptly withdrawn without administration of a receptor-blocking drug is probably not due to a prolonged decline in brain concentrations.
Craving
Craving, the intense desire for a drug, is the most difficult aspect of addiction to overcome. Research on craving has focused on nicotine, alcohol, cocaine, and opiates but has not specifically addressed marijuana.115 Thus, while this section briefly reviews what is known about drug craving, its relevance to marijuana use has not been established.
Most people who suffer from addiction relapse within a year of abstinence, and they often attribute their relapse to craving.58 As addiction develops, craving increases even as maladaptive consequences accumulate. Animal studies indicate that the tendency to relapse is based on changes in brain function that continue for months or years after the last use of the drug.115 Whether neurobiological conditions change during the manifestation of an abstinence syndrome remains an unanswered question in drug abuse research. The "liking" of sweet foods, for example, is mediated by opioid forebrain systems and by brain stem systems,88 whereas "wanting" seems to be mediated by ascending dopamine neurons that project to the nucleus accumbens.109
Anticraving medications have been developed for nicotine and alcohol. The antidepressant, bupropion, blocks nicotine craving,115 while naltrexone blocks alcohol craving.74
Another category of addiction medication includes drugs that block other drugs' effects. Some of those drugs also block craving. For example, methadone blocks the euphoric effects of heroin and also reduces craving.
MARIJUANA USE AND DEPENDENCE
Prevalence of Use
Millions of Americans have tried marijuana, but most are not regular users. In 1996, 68.6 million people--32% of the U.S. population over 12 years old--had tried marijuana or hashish at least once in their lifetime, but only 5% were current users.132 Marijuana use is most prevalent among 18- to 25-year-olds and declines sharply after the age of 34 (Figure 3.1).77,132 Whites are more likely than blacks to use marijuana in adolescence, although the difference decreases by adulthood.132
Most people who have used marijuana did so first during adolescence. Social influences, such as peer pressure and prevalence of use by peers, are highly predictive of initiation into marijuana use.9 Initiation is not, of course, synonymous with continued or regular use. A cohort of 456 students who experimented with marijuana during their high school years were surveyed about their reasons for initiating, continuing, and stopping their marijuana use.9 Students who began as heavy users were excluded from the analysis. Those who did not become regular marijuana users cited two types of reasons for discontinuing. The first was related to health and well-being; that is, they felt that marijuana was bad for their health or for their family and work relationships. The second type was based on age-related changes in circumstances, including increased responsibility and decreased regular contact with other marijuana users. Among high school students who quit, parental disapproval was a stronger influence than peer disapproval in discontinuing marijuana use. In the initiation of marijuana use, the reverse was true. The reasons cited by those who continued to use marijuana were to "get in a better mood or feel better." Social factors were not a significant predictor of continued use. Data on young adults show similar trends. Those who use drugs in response to social influences are more likely to stop using them than those who also use them for psychological reasons.80
The age distribution of marijuana users among the general population contrasts with that of medical marijuana users. Marijuana use generally declines sharply after the age of 34 years, whereas medical marijuana users tend to be over 35. That raises the question of what, if any, relationship exists between abuse and medical use of marijuana; however, no studies reported in the scientific literature have addressed this question.
Prevalence and Predictors of Dependence on Marijuana and Other Drugs
Many factors influence the likelihood that a particular person will become a drug abuser or an addict; the user, the environment, and the drug are all important factors (Table 3.3).114 The first two categories apply to potential abuse of any substance; that is, people who are vulnerable to drug abuse for individual reasons and who find themselves in an environment that encourages drug abuse are initially likely to abuse the most readily available drug--regardless of its unique set of effects on the brain.
The third category includes drug-specific effects that influence the abuse liability of a particular drug. As discussed earlier in this chapter, the more strongly reinforcing a drug is, the more likely that it will be abused. The abuse liability of a drug is enhanced by how quickly its effects are felt, and this is determined by how the drug is delivered. In general, the effects of drugs that are inhaled or injected are felt within minutes, and the effects of drugs that are ingested take a half hour or more.
The proportion of people who become addicted varies among drugs. Table 3.4 shows estimates for the proportion of people among the general population who used or became dependent on different types of drugs. The proportion of users that ever became dependent includes anyone who was ever dependent--whether it was for a period of weeks or years--and thus includes more than those who are currently dependent. Compared to most other drugs listed in this table, dependence among marijuana users is relatively rare. This might be due to differences in specific drug effects, the availability of or penalties associated with the use of the different drugs, or some combination.
Daily use of most illicit drugs is extremely rare in the general population. In 1989, daily use of marijuana among high school seniors was less than that of alcohol (2.9% and 4.2%, respectively).76
Drug dependence is more prevalent in some sectors of the population than in others. Age, gender, and race or ethnic group are all important.8 Excluding tobacco and alcohol, the following trends of drug dependence are statistically significant8: Men are 1.6 times as likely as women to become drug dependent, non-Hispanic whites are about twice as likely as blacks to become drug dependent (the difference between non-Hispanic and Hispanic whites was not significant), and people 25—44 years old are more than three times as likely as those over 45 years old to become drug dependent.
More often than not, drug dependence co-occurs with other psychiatric disorders. Most people with a diagnosis of drug dependence disorder also have a diagnosis of another psychiatric disorder (76% of men and 65% of women).76 The most frequent co-occurring disorder is alcohol abuse; 60% of men and 30% of women with a diagnosis of drug dependence also abuse alcohol. In women who are drug dependent, phobic disorders and major depression are almost equally common (29% and 28%, respectively). Note that this study distinguished only between alcohol, nicotine and "other drugs"; marijuana was grouped among "other drugs." The frequency with which drug dependence and other psychiatric disorders co-occur might not be the same for marijuana and other drugs that were included in that category.
A strong association between drug dependence and antisocial personality or its precursor, conduct disorder, is also widely reported in children and adults (reviewed in 1998 by Robins126). Although the causes of the association are uncertain, Robins recently concluded that it is more likely that conduct disorders generally lead to substance abuse than the reverse.126 Such a trend might, however, depend on the age at which the conduct disorder is manifested.
A longitudinal study by Brooks and co-workers noted a significant relationship between adolescent drug use and disruptive disorders in young adulthood; except for earlier psychopathology, such as childhood conduct disorder, the drug use preceded the psychiatric disorders.18 In contrast with use of other illicit drugs and tobacco, moderate (less than once a week and more than once a month) to heavy marijuana use did not predict anxiety or depressive disorders; but it was similar to those other drugs in predicting antisocial personality disorder. The rates of disruptive disorders increased with
increased drug use. Thus, heavy drug use among adolescents can be a warning sign for later psychiatric disorders; whether it is an early manifestation of or a cause of those disorders remains to be determined.
Psychiatric disorders are more prevalent among adolescents who use drugs--including alcohol and nicotine--than among those who do not.79 Table 3.5 indicates that adolescent boys who smoke cigarettes daily are about 10 times as likely to have a psychiatric disorder diagnosis as those who do not smoke. However, the table does not compare intensity of use among the different drug classes. Thus, although daily cigarette smoking among adolescent boys is more strongly associated with psychiatric disorders than is any use of illicit substances, it does not follow that this comparison is true for every amount of cigarette smoking.79
Few marijuana users become dependent on it (Table 3.4), but those who do encounter problems similar to those associated with dependence on other drugs.19,143 Dependence appears to be less severe among people who use only marijuana than among those who abuse cocaine or those who abuse marijuana with other drugs (including alcohol).19,143
Data gathered in 1990—1992 from the National Comorbidity Study of over 8,000 persons 15—54 years old indicate that 4.2% of the general population were dependent on marijuana at some time.8 Similar results for the frequency of substance abuse among the general population were obtained from the Epidemiological Catchment Area Program, a survey of over 19,000 people. According to data collected in the early 1980s for that study, 4.4% of adults have, at one time, met the criteria for marijuana dependence. In comparison, 13.8% of adults met the criteria for alcohol dependence and 36.0% for tobacco dependence. After alcohol and nicotine, marijuana was the substance most frequently associated with a diagnosis of substance dependence.
In a 15-year study begun in 1979, 7.3% of 1,201 adolescents and young adults in suburban New Jersey at some time met the criteria for marijuana dependence; this indicates that the rate of marijuana dependence might be even higher in some groups of adolescents and young adults than in the general population.71 Adolescents meet the criteria for drug dependence at lower rates of marijuana use than do adults,25 and this suggests that they are more vulnerable to dependence than adults (see Box 3.2).
Youths who are already dependent on other substances are particularly vulnerable to marijuana dependence. For example, Crowley and co-workers31 interviewed a group of 229 adolescent patients in a residential treatment program for delinquent, substance-involved youth and found that those patients were dependent on an average of 3.2 substances. The adolescents had previously been diagnosed as dependent on at least one substance (including nicotine and alcohol) and had three or more conduct disorder symptoms during their life. About 83% of those who had used marijuana at least six times went on to develop marijuana dependence. About equal numbers of youths in the study had a diagnosis of marijuana dependence and a diagnosis of alcohol dependence; fewer were nicotine dependent. Comparisons of dependence potential between different drugs should be made cautiously. The probability that a particular drug will be abused is influenced by many factors, including the specific drug effects and availability of the drug.
Although parents often state that marijuana caused their children to be rebellious, the troubled adolescents in the study by Crowley and co-workers developed conduct disorders before marijuana abuse. That is consistent with reports that the more symptoms of conduct disorders children have, the younger they begin drug abuse,127 and that the earlier they begin drug use, the more likely it is to be followed by abuse or dependence.125
Genetic factors are known to play a role in the likelihood of abuse for drugs other than marijuana,7,129 and it is not unexpected that genetic factors play a role in the marijuana experience, including the likelihood of abuse. A study of over 8,000 male twins listed in the Vietnam Era Twin Registry indicated that genes have a statistically significant influence on whether a person finds the effects of marijuana pleasant.97 Not surprisingly, people who found marijuana to be pleasurable used it more often than those who found it unpleasant. The study suggested that, although social influences play an important role in the initiation of use, individual differences--perhaps associated with the brain's reward system--influence whether a person will continue using marijuana. Similar results were found in a study of female twins.86 Family and social environment strongly influenced the likelihood of ever using marijuana but had little effect on the likelihood of heavy use or abuse. The latter were more influenced by genetic factors. Those results are consistent with the finding that the degree to which rats find THC rewarding is genetically based.92
In summary, although few marijuana users develop dependence, some do. But they appear to be less likely to do so than users of other drugs (including alcohol and nicotine), and marijuana dependence appears to be less severe than dependence on other drugs. Drug dependence is more prevalent in some sectors of the population than others, but no group has been identified as particularly vulnerable to the drug-specific effects of marijuana. Adolescents, especially troubled ones, and people with psychiatric disorders (including substance abuse) appear to be more likely than the general population to become dependent on marijuana.
If marijuana or cannabinoid drugs were approved for therapeutic uses, it would be important to consider the possibility of dependence, particularly for patients at high risk for substance dependence. Some controlled substances that are approved medications produce dependence after long-term use; this, however, is a normal part of patient management and does not generally present undue risk to the patient.
Progression from Marijuana to Other Drugs
The fear that marijuana use might cause, as opposed to merely precede, the use of drugs that are more harmful is of great concern. To judge from comments submitted to the IOM study team, it appears to be of greater concern than the harms directly related to marijuana itself. The discussion that marijuana is a "gateway" drug implicitly recognizes that other illicit drugs might inflict greater damage to health or social relations than
marijuana. Although the scientific literature generally discusses drug use progression between a variety of drug classes, including alcohol and tobacco, the public discussion has focused on marijuana as a "gateway" drug that leads to abuse of more harmful illicit drugs, such as cocaine and heroin.
There are strikingly regular patterns in the progression of drug use from adolescence to adulthood. Because it is the most widely used illicit drug, marijuana is predictably the first illicit drug that most people encounter. Not surprisingly, most users of other illicit drugs used marijuana first.81,82 In fact, most drug users do not begin their drug use with marijuana--they begin with alcohol and nicotine, usually when they are too young to do so legally.82,90
The gateway analogy evokes two ideas that are often confused. The first, more often referred to as the "stepping stone" hypothesis, is the idea that progression from marijuana to other drugs arises from pharmacological properties of marijuana itself.82 The second is that marijuana serves as a gateway to the world of illegal drugs in which youths have greater opportunity and are under greater social pressure to try other illegal drugs. The latter interpretation is most often used in the scientific literature, and it is supported, although not proven, by the available data.
The stepping stone hypothesis applies to marijuana only in the broadest sense. People who enjoy the effects of marijuana are, logically, more likely to be willing to try other mood-altering drugs than are people who are not willing to try marijuana or who dislike its effects. In other words, many of the factors associated with a willingness to use marijuana are, presumably, the same as those associated with a willingness to use other illicit drugs. Those factors include physiological reactions to the drug effect, which are consistent with the stepping stone hypothesis, but also psychosocial factors, which are independent of drug-specific effects. There is no evidence that marijuana serves as a stepping stone on the basis of its particular physiological effect. One might argue that marijuana is generally used before other illicit mood-altering drugs, in part, because its effects are milder; in that case, marijuana is a stepping stone only in the same sense as taking a small dose of a particular drug and then increasing that dose over time is a stepping stone to increased drug use.
Whereas the stepping stone hypothesis presumes a predominantly physiological component of drug progression, the gateway theory is a social theory. The latter does not suggest that the pharmacological qualities of marijuana make it a risk factor for progression to other drug use. Instead, the legal status of marijuana makes it a gateway drug.82
Psychiatric disorders are associated with substance dependence and are probably risk factors for progression in drug use. For example, the troubled adolescents studied by Crowley and co-workers31 were dependent on an average of 3.2 substances, and this suggests that their conduct disorders were associated with increased risk of progressing from one drug to another. Abuse of a single substance is probably also a risk factor for later multiple drug use. For example, in a longitudinal study that examined drug use and dependence, about 26% of problem drinkers reported that they first used marijuana after the onset of alcohol-related problems (R. Pandina, IOM workshop). The study also found that 11% of marijuana users developed chronic marijuana problems; most also had alcohol problems.
Intensity of drug use is an important risk factor in progression. Daily marijuana users are more likely than their peers to be extensive users of other substances (for review, see Kandel and Davies78). Of 34- to 35-year-old men who had used marijuana 10—99 times by the age 24—25, 75% never used any other illicit drug; 53% of those who had used it more than 100 times did progress to using other illicit drugs 10 or more times. Comparable proportions for women are 64% and 50%.78
The factors that best predict use of illicit drugs other than marijuana are probably the following: age of first alcohol or nicotine use, heavy marijuana use, and psychiatric disorders. However, progression to illicit drug use is not synonymous with heavy or persistent drug use. Indeed, although the age of onset of use of licit drugs (alcohol and nicotine) predicts later illicit drug use, it does not appear to predict persistent or heavy use of illicit drugs.90
Data on the gateway phenomenon are often overinterpreted. For example, one study reports that "marijuana's role as a gateway drug appears to have increased."55 It was a retrospective study based on interviews of drug abusers who reported smoking crack or injecting heroin daily. The data from the study provide no indication of what proportion of marijuana users become serious drug abusers; rather, they indicate that serious drug abusers usually use marijuana before they smoke crack or inject heroin. Only a small percentage of the adult population uses crack or heroin daily; during the five-year period from 1993 to 1997, an average of three people per 1,000 used crack and about two per 1,000 used heroin in the preceding month.132
Many of the data on which the gateway theory is based do not measure dependence; instead, they measure use--even once-only use. Thus, they show only that marijuana users are more likely to use other illicit drugs (even if only once) than are people who never use marijuana, not that they become dependent or even frequent users. The authors of these studies are careful to point out that their data should not be used as evidence of an inexorable causal progression; rather they note that identifying stage-based user groups makes it possible to identify the specific risk factors that predict movement from one stage of drug use to the next--the real issue in the gateway discussion.25
In the sense that marijuana use typically precedes rather than follows initiation into the use of other illicit drugs, it is indeed a gateway drug. However, it does not appear to be a gateway drug to the extent that it is the cause or even that it is the most significant predictor of serious drug abuse; that is, care must be taken not to attribute cause to association. The most consistent predictors of serious drug use appear to be the intensity of marijuana use and co-occurring psychiatric disorders or a family history of psychopathology (including alcoholism).78,83
An important caution is that data on drug use progression pertain to nonmedical drug use. It does not follow from those data that if marijuana were available by prescription for medical use, the pattern of drug use would be the same. Kandel and co-workers also included nonmedical use of prescription psychoactive drugs in their study of drug use progression.82 In contrast with the use of alcohol, nicotine, and illicit drugs, there was not a clear and consistent sequence of drug use involving the abuse of prescription psychoactive drugs. The current data on drug use progression neither support nor refute the suggestion that medical availability would increase drug abuse among medical marijuana users. Whether the medical use of marijuana might encourage drug abuse among the general community--not among medical marijuana users themselves but among others simply because of the fact that marijuana would be used for medical purposes--is another question.
LINK BETWEEN MEDICAL USE AND DRUG ABUSE
Almost everyone who spoke or wrote to the IOM study team about the potential harms posed by the medical use of marijuana felt that it would send the wrong message to children and teenagers. They stated that information about the harms caused by marijuana is undermined by claims that marijuana might have medical value. Yet many of our powerful medicines are also dangerous medicines. These two facets of medicine-- effectiveness and risk--are inextricably linked.
The question here is not whether marijuana can be both harmful and helpful but whether the perception of its benefits will increase its abuse. For now any answer to the question remains conjecture. Because marijuana is not an approved medicine, there is little information about the consequences of its medical use in modern society. Reasonable inferences might be drawn from some examples. Opiates, such as morphine and codeine, are an example of a class of drugs that is both abused to great harm and used to great medical benefit, and it would be useful to examine the relationship between their medical use and their abuse. In a "natural experiment" during 1973—1978 some states decriminalized marijuana, and others did not. Finally, one can examine the short-term consequences of the publicity surrounding the 1996 medical marijuana campaign in California and ask whether it had any measurable impact on marijuana consumption among youth in California; the consequences of the "message" that marijuana might have medical use are examined below.
Medical Use and Abuse of Opiates
Two highly influential papers published in the 1920s and 1950s led to widespread concern among physicians and medical licensing boards that liberal use of opiates would
106
in 1996). Such fears have proven unfounded; it is now recognized that fear of producing addicts through medical
treatment resulted in needless suffering among patients with pain as physicians 27,44
In contrast with the use of alcohol, nicotine, and illicit drugs, there was not
result in many addicts (reviewed by Moulin and co-workers
needlessly limited appropriate doses of medications.
addiction problems with misuse of drugs that have been prescribed for medical use.
Few people begin their drug
114
Opiates are carefully regulated in the medical setting, and diversion of medically prescribed opiates to the black market is not generally considered to be a major problem.
No evidence suggests that the use of opiates or cocaine for medical purposes has increased the perception that their illicit use is safe or acceptable. Clearly, there are risks that patients will abuse marijuana for its psychoactive effects and some likelihood of diversion of marijuana from legitimate medical channels into the illicit market. But those risks do not differentiate marijuana from many accepted medications that are abused by some patients or diverted from medical channels for nonmedical use. Medications with abuse potential are placed in Schedule II of the Controlled Substances Act, which brings them under stricter control, including quotas on the amount that can be legally manufactured (see chapter 5 for discussion of the Controlled Substances Act). That scheduling also signals to physicians that a drug has abuse potential and that they should monitor its use by patients who could be at risk for drug abuse.
Marijuana Decriminalization
Monitoring the Future, the annual survey of values and lifestyles of high school seniors, revealed that high school seniors in decriminalized states reported using no more
72
marijuana than did their counterparts in states where marijuana was not decriminalized. Another study reported somewhat conflicting evidence indicating that decriminalization
105
had increased marijuana use.
Network (DAWN), which has collected data on drug-related emergency room (ER) cases since 1975. There was a greater increase from 1975 to 1978 in the proportion of ER patients who had used marijuana in states that had decriminalized marijuana in 1975— 1976 than in states that had not decriminalized it (Table 3.6). Despite the greater increase among decriminalized states, the proportion of marijuana users among ER patients by 1978 was about equal in states that had and states that had not decriminalized marijuana. That is because the non-decriminalized states had higher rates of marijuana use before decriminalization. In contrast with marijuana use, rates of other illicit drug use among ER patients were substantially higher in states that did not decriminalize marijuana use. Thus, there are different possible reasons for the greater increase in marijuana use in the decriminalized states. On the one hand, decriminalization might have led to an increased use of marijuana (at least among people who sought health care in hospital ERs). On the other hand, the lack of decriminalization might have encouraged greater use of drugs that are even more dangerous than marijuana.
The differences between the results for high school seniors from the Monitoring the
Future study and the DAWN data are unclear, although the author of the latter study
suggests that the reasons might lie in limitations inherent in how the DAWN data are
105
In 1976, the Netherlands adopted a policy of toleration for possession of up to 30 g of marijuana. There was little change in marijuana use during the seven years after the policy change, which suggests that the change itself had little effect; however, in 1984, when Dutch "coffee shops" that sold marijuana commercially spread throughout
That study used data from the Drug Awareness Warning
collected.
Amsterdam, marijuana use began to increase.
continued to increase in the Netherlands at the same rate as in the United States and Norway--two countries that strictly forbid marijuana sale and possession. Furthermore, during this period, approximately equal percentages of American and Dutch 18 year olds used marijuana; Norwegian 18 year olds were about half as likely to have used marijuana. The authors of this study conclude that there is little evidence that the Dutch marijuana depenalization policy led to increased marijuana use, although they note that commercialization of marijuana might have contributed to its increased use. Thus, there is little evidence that decriminalization of marijuana use necessarily leads to a substantial increase in marijuana use.
The Medical Marijuana Debate
The most recent National Household Survey on Drug Abuse showed that among people 12—17 years old the perceived risk associated with smoking marijuana once or
132
(Perceived risk is
measured as the percentage of survey respondents who report that they "perceive great
risk of harm" in using a drug at a specified frequency.) At first glance, that might seem to
validate the fear that the medical marijuana debate of 1996--before passage of the
California medical marijuana referendum in November 1997--had sent a message that
marijuana use is safe. But a closer analysis of the data shows that Californian youth were
an exception to the national trend. In contrast to the national trend, the perceived risk of
1321
In summary, there is no evidence that the medical marijuana debate has altered adolescents'
132
PSYCHOLOGICAL HARMS
In assessing the relative risks and benefits related to the medical use of marijuana, the psychological effects of marijuana can be viewed both as unwanted side effects and as potentially desirable end points in medical treatment. However, the vast majority of research on the psychological effects of marijuana has been in the context of assessing the drug's intoxicating effects when it is used for nonmedical purposes. Thus, the literature does not directly address the effects of marijuana taken for medical purposes.
There are some important caveats to consider in attempting to extrapolate from the research mentioned above to the medical use of marijuana. The circumstances under which psychoactive drugs are taken are an important influence on their psychological effects. Furthermore, research protocols to study marijuana's psychological effects in most instances were required to use participants who already had experience with marijuana. People who might have had adverse reactions to marijuana either would choose not to participate in this type of study or would be screened out by the investigator. Therefore, the incidence of adverse reactions to marijuana that might occur in people with no marijuana experience cannot be estimated from such studies. A further complicating factor concerns the dose regimen used for laboratory studies. In most instances, laboratory research studies have looked at the effects of single doses of
twice a week had decreased significantly between 1996 and 1997.
marijuana use did not change among California youth between 1996 and 1997.
perceptions of the risks associated with marijuana use.
98
During the 1990s, marijuana use has
marijuana, which might be different from those observed when the drug is taken repeatedly for a chronic medical condition.
Nonetheless, laboratory studies are useful in suggesting what psychological functions might be studied when marijuana is evaluated for medical purposes. Results of laboratory studies indicate that acute and chronic marijuana use has pronounced effects on mood, psychomotor, and cognitive functions. These psychological domains should therefore be considered in assessing the relative risks and therapeutic benefits related to marijuana or cannabinoids for any medical condition.
Psychiatric Disorders
A major question remains as to whether marijuana can produce lasting mood disorders
52
or psychotic disorders, such as schizophrenia. Georgotas and Zeidenberg reported that
smoking 10—22 marijuana cigarettes per day was associated with a gradual waning of
the positive mood and social facilitating effects of marijuana and an increase in
irritability, social isolation, and paranoid thinking. Inasmuch as smoking one cigarette is 68,95,118
enough to make a person feel "high" for about 1—3 hours,
the subjects in that
study were taking very high doses of marijuana. Reports have described the development
of apathy, lowered motivation, and impaired educational performance in heavy marijuana
121,122
There are clinical reports of marijuana-induced psychosis-like states (schizophrenia-like,
112
depression, and/or mania) lasting for a week or more.
of the varied nature of the psychotic states induced by marijuana, there is no specific "marijuana psychosis." Rather, the marijuana experience might trigger latent
users who do not appear to be behaviorally impaired in other ways.
psychopathology of many types.
concluded that
disorder.
As noted earlier, drug abuse is common among people with psychiatric
66
60
More recently, Hall and colleagues
"there is reasonable evidence that heavy cannabis use, and perhaps acute use in sensitive
individuals, can produce an acute psychosis in which confusion, amnesia, delusions, hallucinations, anxiety, agitation and hypomanic symptoms predominate." Regardless of which of those interpretations is correct, the two reports agree that there is little evidence that marijuana alone produces a psychosis that persists after the period of intoxication.
Schizophrenia
The association between marijuana and schizophrenia is not well understood. The
scientific literature indicates general agreement that heavy marijuana use can precipitate
schizophrenic episodes but not that marijuana use can cause the underlying psychotic 59,96,151
disorders. Estimates of the prevalence of marijuana use among schizophrenics vary
considerably but are in general agreement that it is at least as great as that among the
general population.
35
Schizophrenics prefer the effects of marijuana to those of alcohol
134
134
and cocaine,
reasons for this are unknown, but it raises the possibility that schizophrenics might obtain some symptomatic relief from moderate marijuana use. But overall, compared with the general population, people with schizophrenia or with a family history of schizophrenia
which they seem to use less often than does the general population.
The
Hollister suggests that, because
are likely to be at greater risk for adverse psychiatric effects from the use of cannabinoids.
Cognition
As discussed earlier, acutely administered marijuana impairs cognition.
60,66,112
Positron emission tomography (PET) imaging allows investigators to measure the acute
effects of marijuana smoking on active brain function. Human volunteers who perform
auditory attention tasks before and after smoking a marijuana cigarette show impaired
performance while under the influence of marijuana; this is associated with substantial
reduction in blood flow to the temporal lobe of the brain, an area that is sensitive to such 116,117
tasks.
Marijuana smoking increases blood flow in other brain regions, such as the 101,155
frontal lobes and lateral cerebellum.
Earlier studies purporting to show structural
22
changes in the brains of heavy marijuana users
have not been replicated with more
sophisticated techniques.
28,89
14,122
Nevertheless, recent studies
marijuana users after a brief period (19—24 hours) of marijuana abstinence. Longer term
140
Although these studies have attempted to match heavy marijuana users with subjects of similar cognitive
abilities before exposure to marijuana use, the adequacy of this matching has been
133
cognitive deficits in heavy marijuana users have also been reported.
have found subtle defects in cognitive tasks in heavy
questioned.
reviewed in an article by Pope and colleagues.
are designed to differentiate between changes in brain function caused the effects of marijuana and by the illness for which marijuana is being given. AIDS dementia is an obvious example of this possible confusion. It is also important to determine whether repeated use of marijuana at therapeutic dosages produces any irreversible cognitive effects.
Psychomotor Performance
Marijuana administration has been reported to affect psychomotor performance on a
23
not only details the studies that have been done but also points out the inconsistencies among studies, the methodological
shortcomings of many studies, and the large individual differences among the studies
attributable to subject, situational, and methodological factors. Those factors must be
considered in studies of psychomotor performance when participants are involved in a
clinical trial of the efficacy of marijuana. The types of psychomotor functions that have
been shown to be disrupted by the acute administration of marijuana include body sway,
hand steadiness, rotary pursuit, driving and flying simulation, divided attention, sustained
attention, and the digit-symbol substitution test. A study of experienced airplane pilots
showed that even 24 hours after a single marijuana cigarette their performance on flight
163
Before the tests, however, they told the study investigators that they were sure their performance would be unaffected.
The complex methodological issues facing research in this area are well
number of tasks. The review by Chait and Pierri
simulator tests was impaired.
121
Care must be exercised so that studies
Cognitive impairments associated with acutely administered marijuana limit the activities that people would be able to do safely or productively. For example, no one under the influence of marijuana or THC should drive a vehicle or operate potentially dangerous equipment.
Amotivational Syndrome
One of the more controversial effects claimed for marijuana is the production of an
"amotivational syndrome." This syndrome is not a medical diagnosis, but it has been used
to describe young people who drop out of social activities and show little interest in
school, work, or other goal-directed activity. When heavy marijuana use accompanies
these symptoms, the drug is often cited as the cause, but no convincing data demonstrate
23
a causal relationship between marijuana smoking and these behavioral characteristics.
is not enough to observe that a chronic marijuana user lacks motivation. Instead, relevant personality traits and behavior of subjects must be assessed before and after the subject becomes a heavy marijuana user. Because such research can only be done on subjects who become heavy marijuana users on their own, a large population study--such as the Epidemiological Catchment Area study described earlier in this chapter--would be needed to shed light on the relationship between motivation and marijuana use. Even then, although a causal relationship between the two could, in theory, be dismissed by an epidemiological study, causality could not be proven. |
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | In the reference text, three trends are stated. I would like you to summarize the central idea of each trend. Their respective implication should be included in the summary. Finally, in trend one, consumers in the US are expressing a desire to have other kinds of at-home kits, could you retrieve the test with the lowest percentage? | Five trends shaping the consumer health and wellness space in 2024
Fifty-eight percent of US respondents to our survey said they are prioritizing wellness more now than they did a year ago. The following five trends encompass their newly emerging priorities, as well as those that are consistent with our earlier research.
Trend one: Health at home
The COVID-19 pandemic made at-home testing kits a household item. As the pandemic has moved into its endemic phase, consumers are expressing greater interest in other kinds of at-home kits: 26 percent of US consumers are interested in testing for vitamin and mineral deficiencies at home, 24 percent for cold and flu symptoms, and 23 percent for cholesterol levels.
At-home diagnostic tests are appealing to consumers because they offer greater convenience than going to a doctor’s office, quick results, and the ability to test frequently. In China, 35 percent of consumers reported that they had even replaced some in-person healthcare appointments with at-home diagnostic tests—a higher share than in the United States or the United Kingdom.
Although there is growing interest in the space, some consumers express hesitancy. In the United States and the United Kingdom, top barriers to adoption include the preference to see a doctor in person, a perceived lack of need, and price; in China, test accuracy is a concern for approximately 30 percent of consumers.
Implications for companies: Companies can address three critical considerations to help ensure success in this category. First, companies will want to determine the right price value equation for at-home diagnostic kits since cost still presents a major barrier for many consumers today. Second, companies should consider creating consumer feedback loops, encouraging users to take action based on their test results and then test again to assess the impact of those interventions. Third, companies that help consumers understand their test results—either through the use of generative AI to help analyze and deliver personalized results, or through integration with telehealth services—could develop a competitive advantage.
Trend two: A new era for biomonitoring and wearables
Roughly half of all consumers we surveyed have purchased a fitness wearable at some point in time. While wearable devices such as watches have been popular for years, new modalities powered by breakthrough technologies have ushered in a new era for biomonitoring and wearable devices.
Wearable biometric rings, for example, are now equipped with sensors that provide consumers with insights about their sleep quality through paired mobile apps. Continuous glucose monitors, which can be applied to the back of the user’s arm, provide insights about the user’s blood sugar levels, which may then be interpreted by a nutritionist who can offer personalized health guidance.
Roughly one-third of surveyed wearable users said they use their devices more often than they did last year, and more than 75 percent of all surveyed consumers indicated an openness to using a wearable in the future. We expect the use of wearable devices to continue to grow, particularly as companies track a wider range of health indicators.
Implications for companies: While there is a range of effective wearable solutions on the market today for fitness and sleep, there are fewer for nutrition, weight management, and mindfulness, presenting an opportunity for companies to fill these gaps.
Wearables makers and health product and services providers in areas such as nutrition, fitness, and sleep can explore partnerships that try to make the data collected through wearable devices actionable, which could drive greater behavioral change among consumers. One example: a consumer interested in managing stress levels might wear a device that tracks spikes in cortisol. Companies could then use this data to make personalized recommendations for products related to wellness, fitness, and mindfulness exercises.
Businesses must keep data privacy and clarity of insights top of mind. Roughly 30 percent of China, UK, and US consumers are open to using a wearable device only if the data is shared exclusively with them. Additionally, requiring too much manual data input or sharing overly complicated insights could diminish the user experience. Ensuring that data collection is transparent and that insights are simple to understand and targeted to consumers’ specific health goals or risk factors will be crucial to attracting potential consumers.
Trend three: Personalization’s gen AI boost
Nearly one in five US consumers and one in three US millennials prefer personalized products and services. While the preference for personalized wellness products was lower than in years prior, we believe this is likely due to consumers becoming more selective about which personalized products and services they use.
Technological advancements and the rise of first-party data are giving personalization a new edge. Approximately 20 percent of consumers in the United Kingdom and the United States and 30 percent in China look for personalized products and services that use biometric data to provide recommendations. There is an opportunity to pair these tools with gen AI to unlock greater precision and customization. In fact, gen AI has already made its way to the wearables and app space: some wearables use gen AI to design customized workouts for users based on their fitness data.
Implications for companies: Companies that offer software-based health and wellness services to consumers are uniquely positioned to incorporate gen AI into their personalization offerings. Other businesses could explore partnerships with companies that use gen AI to create personalized wellness recommendations. | "================
<TEXT PASSAGE>
=======
Five trends shaping the consumer health and wellness space in 2024
Fifty-eight percent of US respondents to our survey said they are prioritizing wellness more now than they did a year ago. The following five trends encompass their newly emerging priorities, as well as those that are consistent with our earlier research.
Trend one: Health at home
The COVID-19 pandemic made at-home testing kits a household item. As the pandemic has moved into its endemic phase, consumers are expressing greater interest in other kinds of at-home kits: 26 percent of US consumers are interested in testing for vitamin and mineral deficiencies at home, 24 percent for cold and flu symptoms, and 23 percent for cholesterol levels.
At-home diagnostic tests are appealing to consumers because they offer greater convenience than going to a doctor’s office, quick results, and the ability to test frequently. In China, 35 percent of consumers reported that they had even replaced some in-person healthcare appointments with at-home diagnostic tests—a higher share than in the United States or the United Kingdom.
Although there is growing interest in the space, some consumers express hesitancy. In the United States and the United Kingdom, top barriers to adoption include the preference to see a doctor in person, a perceived lack of need, and price; in China, test accuracy is a concern for approximately 30 percent of consumers.
Implications for companies: Companies can address three critical considerations to help ensure success in this category. First, companies will want to determine the right price value equation for at-home diagnostic kits since cost still presents a major barrier for many consumers today. Second, companies should consider creating consumer feedback loops, encouraging users to take action based on their test results and then test again to assess the impact of those interventions. Third, companies that help consumers understand their test results—either through the use of generative AI to help analyze and deliver personalized results, or through integration with telehealth services—could develop a competitive advantage.
Trend two: A new era for biomonitoring and wearables
Roughly half of all consumers we surveyed have purchased a fitness wearable at some point in time. While wearable devices such as watches have been popular for years, new modalities powered by breakthrough technologies have ushered in a new era for biomonitoring and wearable devices.
Wearable biometric rings, for example, are now equipped with sensors that provide consumers with insights about their sleep quality through paired mobile apps. Continuous glucose monitors, which can be applied to the back of the user’s arm, provide insights about the user’s blood sugar levels, which may then be interpreted by a nutritionist who can offer personalized health guidance.
Roughly one-third of surveyed wearable users said they use their devices more often than they did last year, and more than 75 percent of all surveyed consumers indicated an openness to using a wearable in the future. We expect the use of wearable devices to continue to grow, particularly as companies track a wider range of health indicators.
Implications for companies: While there is a range of effective wearable solutions on the market today for fitness and sleep, there are fewer for nutrition, weight management, and mindfulness, presenting an opportunity for companies to fill these gaps.
Wearables makers and health product and services providers in areas such as nutrition, fitness, and sleep can explore partnerships that try to make the data collected through wearable devices actionable, which could drive greater behavioral change among consumers. One example: a consumer interested in managing stress levels might wear a device that tracks spikes in cortisol. Companies could then use this data to make personalized recommendations for products related to wellness, fitness, and mindfulness exercises.
Businesses must keep data privacy and clarity of insights top of mind. Roughly 30 percent of China, UK, and US consumers are open to using a wearable device only if the data is shared exclusively with them. Additionally, requiring too much manual data input or sharing overly complicated insights could diminish the user experience. Ensuring that data collection is transparent and that insights are simple to understand and targeted to consumers’ specific health goals or risk factors will be crucial to attracting potential consumers.
Trend three: Personalization’s gen AI boost
Nearly one in five US consumers and one in three US millennials prefer personalized products and services. While the preference for personalized wellness products was lower than in years prior, we believe this is likely due to consumers becoming more selective about which personalized products and services they use.
Technological advancements and the rise of first-party data are giving personalization a new edge. Approximately 20 percent of consumers in the United Kingdom and the United States and 30 percent in China look for personalized products and services that use biometric data to provide recommendations. There is an opportunity to pair these tools with gen AI to unlock greater precision and customization. In fact, gen AI has already made its way to the wearables and app space: some wearables use gen AI to design customized workouts for users based on their fitness data.
Implications for companies: Companies that offer software-based health and wellness services to consumers are uniquely positioned to incorporate gen AI into their personalization offerings. Other businesses could explore partnerships with companies that use gen AI to create personalized wellness recommendations.
https://www.mckinsey.com/industries/consumer-packaged-goods/our-insights/the-trends-defining-the-1-point-8-trillion-dollar-global-wellness-market-in-2024?stcr=E8E9B8D1DADC4FF7928252A2E8D12F2B&cid=other-eml-alt-mip-mck&hlkid=3ac2023292574ef9a3db1c1785acc32d&hctky=12113536&hdpid=0df4d40d-7d9b-4711-914d-82aea6c69268
================
<QUESTION>
=======
In the reference text, three trends are stated. I would like you to summarize the central idea of each trend. Their respective implication should be included in the summary. Finally, in trend one, consumers in the US are expressing a desire to have other kinds of at-home kits, could you retrieve the test with the lowest percentage?
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." |
Answer the question using only information from the provided context block.
| What are some of the benefits of online education? | INTRODUCTION Historically, postsecondary education in the United States was founded on the principles of the European system, requiring the physical presence of professors and students in the same location (Knowles, 1994). From 1626, with the founding of Harvard University (The Harvard Guide, 2004), to the development of junior colleges and vocational schools in the early 1900s (Cohen & Brawer, 1996; Jacobs & Grubb, 2003), the higher education system developed to prepare post-high school students for one of three separate tiers. The college and university system in the United States developed its own set of structures designed to prepare students for baccalaureate and graduate degrees. Junior colleges were limited to associate degrees, while vocational education institutions offered occupational certificates. In many cases, there was inadequate recognition of the postsecondary education offered at junior colleges and vocational education institutions, resulting in the inability of students to transfer to 4-year institutions (National Center for Education Statistics, 2006). In the mid-20th century, some junior colleges began to provide academic, vocational, and personal development educational offerings for members of the local communities. During this same period, junior or community colleges developed a role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs involved Associate of Arts (AA) and Associate of Science (AS) degrees. Associate of Applied Science (AAS) degrees were developed during the 1990s. The AAS degree was granted to those 2 who successfully completed the majority of their college program in vocational education. The creation of a variety of applied baccalaureate degrees allowed students who had previously thought of the AAS degree as a terminal program to complete a baccalaureate degree (Kansas Board of Regents, 2002-2003). Online education also became a strategy for students to access higher education in the 1990s (Allen & Seaman, 2007b). The proliferation of online courses alleviated some of the location-bound barriers to higher education, but online education was criticized as less rigorous than traditional classroom-based course work by traditional academicians. Russell attempted to address this argument with his 1999 meta-analysis of studies dating from the 1920s and covering multiple delivery models, including online education. Russell concluded there was no statistically significant difference in student achievement between courses offered online and those offered in the traditional classroom setting. Since the development of correspondence courses in the 1920s, researchers have attempted to ascertain if students participating in distance education are being shortchanged in their educational goals. No significant difference in grades has been found in the majority of studies designed to address this issue. Studies analyzing online student retention have shown significantly lower retention for online students. In the last 10 years, research studies have expanded to include variations of online education. These include strictly online, hybrid courses, Web-assisted classroom settings, and the traditional higher education course offered only as face-to-face instruction (Carmel & Gold, 2007). 
Online education continues to proliferate at the same time the number of secondary students in the United States overall is projected to increase (National Center 3 for Education Statistics [NCES], 2006). The projected increase of potential postsecondary students and online postsecondary options provides opportunities for increases in online education programs and courses. In 2000, NCES reported that over 65% of students in higher education were participating in online courses. In a 2007 study, Allen and Seaman estimated only 16% of those enrolled in online education courses are undergraduate students seeking their first degree, counter to the projected increase in traditional-age students. The majority of enrollees in online education are adults updating or advancing their credentials, creating an additional educational market for colleges and universities seeking to expand enrollment without adding physical space (Allen & Seaman, 2007a). For states and localities faced with a contradictory traditional-age enrollment decrease, these figures present an untapped market for higher education courses and programs. Background Researchers attempted to analyze the efficacy of distance education as far back as the 1920s when correspondence courses were created to meet the need of students not willing to attend a traditional classroom-based higher education setting. A meta-analysis of these studies resulted in “The No Significant Difference Phenomenon,” reported by Russell (2001). The results of over 355 studies were compiled, comparing various modes of delivery including correspondence, audio, television courses, and the newest wave of computer-facilitated instruction. Following analyses of studies completed prior to 2001, Russell concluded there was no difference in learning between students enrolled in distance education and those completing courses in the traditional setting. Studies completed since then have provided mixed results. Summers, Waigand, and Whittaker (2005) found there was no difference in GPA and retention between the 4 online and traditional classroom. Arle (2002) found higher achievement by online students, and Brown and Liedholm (2002) found GPA and student retention better in a traditional classroom setting. Student retention is an integral part of the student achievement conversation and is an issue for all forms of higher education. Degree-seeking students’ overall retention has been reported as less than 56% by NCES (2001). Long considered a problem in higher education, attention to the distance education model has shown even lower retention rates in online students than in students attending at the traditional college setting (Phipps & Meristosis, 1999). Research on different modalities, such as fully online and hybrid online courses, has produced mixed results (Carmel & Gold, 2007). No significant trend toward increased retention of students in any of the online modalities has been documented. Retention studies of transfer students have primarily included traditionally defined students transfering from a community college. Statistics have consistantly shown a lower retention rate for students transfering from a community college to a 4-year university than for students who began their post-high school education at a 4-year institution (NCES, 2006). 
Townsend’s studies of transfer students at the University of Missouri-Columbia also showed a lower baccalaureate retention rate for students who had completed an AAS degree than for students beginning their education at a 4-year institution (Townsend, 2002). Occupationally oriented bachelor’s degree completion programs are relatively new to higher education. Transfer programs in the liberal arts from community colleges to 4-year institutions were common by the 1990s. Townsend (2001), in her study 5 conducted at the University of Missouri–Columbia, observed the blurring of the lines between non-transferrable occupationally oriented undergraduate degrees and undergraduate degrees and certificates that were easily transferred. The study conducted by Townsend was among the first to recognize that many students who began their education at community and technical colleges had bachelor’s degree aspirations that grew after their completion of an occupationally-oriented degree. Laanan proposed that the increase in institutions offering AAS degrees necessitated new ways to transfer undergraduate credits (2003). The setting of this study is a medium-sized Midwestern campus located in Topeka, Kansas. Washburn University enrolls approximately 6000 students a year in undergraduate and graduate programs, including liberal arts, professional schools, and a law school (Washburn University, 2008). The Technology Administration (TA) program selected for the present study began in the 1990s as a baccalaureate degree completion program for students who had received an occupationally oriented associate degree at a Kansas community college or through Washburn’s articulation agreement with Kansas vocational-technical schools. This program provided students who previously had obtained an Associate of Applied Science degree in an occupational area an opportunity to earn a bachelor’s degree. Peterson, Dean of Continuing Education, Washburn University, stated that in early 1999, Washburn University began online courses and programs at the behest of a neighboring community college (personal communication, April 18, 2008). Washburn was asked to develop an online bachelor’s degree completion program for students graduating from community colleges and technical colleges with an Associate of Applied 6 Science degree. The TA program was among the first programs to offer the online bachelor’s degree completion option. The TA program offered its first online courses in Spring 2000. Online education at Washburn expanded to other programs and courses, to include over 200 courses (Washburn University, 2008). The original online partnership with two community colleges expanded to include 16 additional community colleges and four technical colleges in Kansas, as well as colleges in Missouri, California, Wisconsin, South Carolina, and Nebraska (Washburn University, 2008). An initial study in 2002 of student’s course grades and retention in online courses offered at Washburn showed no significant difference between students enrolled in online courses and students enrolled in traditional face-to-face course work (Peterson, personal communication, April 18, 2008). No studies of program retention have been completed. In 2008, Atkins reported overall enrollment at Washburn University decreased 6.7% from Fall 2004 to Fall 2008, from 7400 to 6901 students. During the same period, online course enrollment patterns increased 65%, from 3550 students to 5874 in 2007- 2008 (Washburn University, 2008). 
Atkins also reported that between 1998 and 2008, the ratio of traditional post-high school age students to nontraditional students enrolling at Washburn University reversed from 40:60 to 60:40. The shift in enrollment patterns produced an increase in enrollment in the early part of the 21st century; however, Washburn University anticipated a decrease in high school graduates in Kansas through 2016, based on demographic patterns of the state. The state figures are opposite the anticipated increase of traditional-age students nationally (NCES, 2008). The increase in 7 distance education students in relation to the anticipated decline in traditional-age students provided the focus for the study. Purpose of the Study Online education has become an important strategy for the higher education institution that was the setting of this study. First, the purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. The second part of the study was a replication of studies comparing modes of online course delivery to traditional classroom-based instruction (Carmel & Gold, 2007; Russell, 1999). A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study’s purpose was to expand the knowledge base concerning online education to include its efficacy in providing baccalaureate degree completion opportunities. Research Questions Roberts (2004) stated research questions guide the study and usually provide the structure for presenting the results of the research. The research questions guiding this study were: 8 1. Is there is a statistically significant difference between students’ grades in online classes and traditional face-to-face classes? 2. Is there a statistically significant difference between course retention rates in online classes and traditional face-to-face classes? 3. Is there a statistically significant difference between program retention for students entering the program enrolled in online classes and students entering the program enrolled in traditional face-to-face classes? Overview of the Methodology A quantitative study was utilized to compare grades by course, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized from comparative online and traditional face-to-face classes in two separate courses. In order to answer Research Question 1, a sample of 885 students enrolled in online and traditional face-to-face courses was identified. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006 in both the online and traditional face-to-face classes. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed. 
A two-factor analysis of variance was used to analyze for the potential difference in the dependent variables, course grades due to delivery method (online and face-to-face), instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze course and program retention (Research Questions 2 and 3). 9 Delimitations Roberts (2004) defined delimitations as the boundaries of the study that are controlled principally by the researcher. The delimitations for this study were 1. Only data from 2002 through 2008 from Technology Administration online and face-to-face courses were utilized. 2. The study was confined to students enrolled at Washburn University in the Technology Administration program. 3. Only grades and retention were analyzed. Assumptions Assumptions are defined as those things presupposed in a study (Roberts, 2004). The study was based on the following assumptions: 1. Delivery of content was consistent between online and face-to-face courses and instructors, 2. Course objectives were the same for paired online and traditional face-toface courses, 3. All students enrolled in the TA program met the same criteria for admission to the University, 4. All data entered in the Excel spreadsheets were correct, 5. All students enrolled in the TA program met the same criteria for grade point average and program prerequisites. 10 Definitions The following terms are defined for the purpose of this study: Distance education. Education or training courses delivered to remote locations via postal delivery, or broadcast by audio, video, or computer technologies (Allen, 2007). Dropout. A dropout is defined as a student who has left school and discontinued studies (Merriam-Webster's Collegiate Dictionary, 1998). Face-to-face delivery. This is a course that uses no online technology; content is delivered in person, either in written or oral form (Allen, 2007). Hybrid course. This course is a blend of the online and face-to-face course. A substantial proportion of the content is delivered online, typically using some online discussions and some face-to-face meetings (Allen, 2007). Online course. This defines a course where most or all of the content is delivered online via computer technologies. Typically, there are no face-to-face meetings (Allen, 2007). 2+2 PLAN. The Partnership for Learning and Networking is a collaborative set of online 2+2 baccalaureate degree programs developed by Washburn University. The programs require completion of an associate degree from one of the partner community or technical colleges (Washburn University, 2008). Retention. This term refers to the completion of a course by receiving a letter grade in a course, or a certificate of completion or degree for program completion (Washburn University, 2008). Web-assisted. A course that uses Web-based technology to facilitate what is essentially a face-to-face course (Allen, 2007). 11 Organization of the Study This study consists of five chapters. Chapter One introduced the role of distance education in higher education. Chapter One included the background of the study, the research questions, overview of the methodology, the delimitations of the study, and the definition of terms. Chapter Two presents a literature review, which includes the history of occupational postsecondary education, distance education, and studies relating to grades and retention of students involved in distance education. 
Chapter Three describes the methodology used for the research study. It includes the selection of participants, design, data collection, and statistical procedures of the study. Chapter Four presents the findings of the research study. Finally, Chapter Five provides a discussion of the results, conclusions, and implications for further research and practice. 12 CHAPTER TWO LITERATURE REVIEW This chapter presents the background for research into the efficacy of distance education in the delivery of higher education. Research studies have focused primarily on grades as a measure of the quality of distance education courses as compared to traditional face-to-face instruction. Utilizing grades has produced a dividing line among education researchers concerning the use of distance education as a delivery model. Retention in distance education has focused primarily on single courses, with little program retention data available. Data from retention studies in higher education have focused primarily on the traditional 4-year university student. Retention studies of community college students have produced quantitative results; however, these studies have been directed at community college students who identify themselves as transfer students early in their community college careers. Retention studies of students enrolled in occupationally oriented programs are limited. Statistical data of higher education shows an increased use of distance education for traditional academic courses as well as occupationally oriented courses. The increase in distance education courses and programs has provided a new dimension to studies of both grades and retention. The recognition of this increase, as well as questions concerning its impact on student learning and retention, produced the impetus for this study. The following review of the literature represents the literature related to this research study. Through examination of previous research, the direction of the present study was formulated. Specifically, the chapter is organized into four sections: (a) the 13 history of occupational transfer programs; (b) the history and research of distance education, including occupational transfer programs utilizing distance education; (c) research utilizing grades as an indicator of student learning in online education; and (d) research focusing on student retention in higher education, including student retention issues in transfer education and online transfer courses and programs. History of Occupational Transfer Programs The measure of success in higher education has been characterized as the attainment of a bachelor’s degree at a 4-year university. Occupationally oriented education was considered primarily a function of job preparation, and until the 1990s was not considered transferrable to other higher education institutions. Occupational transfer programs are a recent occurrence within the postsecondary system that provides an additional pathway to bachelor’s degree completion. Historically, the postsecondary experience in the United States developed as a three-track system. Colleges were established in the United States in 1636 with the founding of Harvard College (The Harvard Guide, 2004). Junior colleges were first founded in 1901 as experimental post-high school graduate programs (Joliet Junior College History, 2008). Their role was initially as a transfer institution to the university. 
When the Smith-Hughes Act was passed in 1917, a system of vocational education was born in the United States (Jacobs & Grubb, 2003), and was designed to provide further education to those students not viewed as capable of success in a university setting. Vocational education, currently referred to as occupational or technical education, was not originally designed to be a path to higher education. The first programs were designed to help agricultural workers complete their education and increase their skills. 14 More vocational programs were developed during the early 20th century as industrialization developed and as increasing numbers of skills were needed by workers in blue-collar occupations (Jacobs & Grubb, 2003). In the mid-20th century, some junior colleges expanded their programs beyond academic selections to provide occupational development and continuing education. Because of the geographic area from which they attracted students, junior colleges developed a role as “community” colleges. They also solidified their role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs to 4-year universities involved traditional academic degrees, including the Associate of Arts (AA) and Associate of Science (AS) degrees. Occupational programs and continuing education were viewed as terminal and non-transferrable. In 1984, Congress authorized the Carl Perkins Vocational and Technical Education Act (P.L. 98-524). In the legislation, Congress responded to employers’ concerns about the lack of basic skills in employees by adding academic requirements to vocational education legislation. Vocational program curriculum was expanded to include language arts, mathematics, and science principles, and the curriculum reflected the context of the program. The Secretary’s Commission on Achieving Necessary Skills (SCANS) was created in 1990 to determine the skills young people need to succeed in the world of work (U.S. Department of Labor, 2000). In the second Carl Perkins reauthorization in 1990 (P.L. 105-332), Congress responded to the report, which targeted academic and job skills, by outlining a seamless system of vocational and academic 15 education to prepare vocational students to progress into and through higher education. This emphasis led to the development of Associate of Applied Science (AAS) degrees during the 1990s. Granted to those who have successfully completed programs in the applied arts and sciences for careers, AAS degrees were seen as terminal (Kansas Board of Regents, 2002-2003). But as one goal was attained, conversation turned to creating a pathway from occupational associate degrees to bachelor’s degree completion. The desire of students to continue from technical degrees to a baccalaureate was not a new idea. In a paper presented in 1989 to the American Technical Association national conference, TrouttErvin and Morgan’s overview of 2+2 programs showed acceptance of AAS degrees at traditional universities was generally non-existent. Their suggestion for an academic bridge from early technical education to baccalaureate programs highlighted programs accepting AAS degrees toward baccalaureate completion were an exception rather than a rule (Troutt-Ervin & Morgan, 1989). 
It was not until the late 1990s that applied baccalaureate degrees recognized credits from technical degree students who had previously thought of themselves in a terminal program to complete their baccalaureate degree (Wellman, 2002). Despite the advance of recognition of AAS degrees, standard definitions of transfer students continued to exclude students who completed technical programs. The U.S. Department of Education did not include students receiving an Associate of Applied Science degree in the definition of students preparing for transfer to 4-year colleges (Bradburn, Hurst, & Peng, 2001; Carnevale, 2006). Most states had comparable policies in place concerning core academic curriculum, articulation agreements, transfer of credit, 16 and statewide transfer guides. There was no general recognition of occupational credit transfer. Only a few states, including Kansas, Missouri, and Washington, allowed credits earned in occupationally oriented degrees to transfer to 4-year institutions (Townsend, 2001). No state had set clear goals for the transference of occupational credits between institutions or for the state as a whole (Wellman, 2002). Despite the lack of recognition of occupational transfer credit at the federal level, a new definition of transfer education had emerged. Initially defined as the general education component of the first 2 years of a baccalaureate, the definition of transfer education now included any courses that transferred to a 4-year college, regardless of the nature of the courses (Townsend, 2001). The line between vocational schools, community colleges, and 4-year institutions blurred in the United States as employers and students increasingly made business decisions regarding education and workforce development. Employers increasingly asked for employees with academic and technical skills, as well as critical thinking skills and personal responsibility (U.S. Department of Labor, 2000). Returning students themselves were more attuned to the demands of the 21st century workforce. Their desire to return to higher education, coupled with the economy and the variety of options available to them, required a more adaptive higher education system (Carnevale, 2006). There was growing demand among new and returning students for higher education opportunities responsive to their needs. The expanding needs of the returning student provided opportunities for higher education to respond by utilizing different delivery models. 17 Distance Education Online education became a strategy for postsecondary institutions when the first correspondence courses were initiated with the mail service in the early 20th century (Russell, 1999). As various technologies emerged, distance education utilized television and video models, in addition to paper-based correspondence courses. The expansion of distance education utilizing computer technologies renewed academic debate over the efficacy of the delivery model. Online education utilizing the Internet became a significant factor in the 1990s, prompting renewed evaluation of the use of distance learning opportunities (Russell, 1999, Phipps & Meristosis, 1999). In 1999–2000, the number of students who took any distance education courses was 8.4% of total undergraduates enrolled in postsecondary education (NCES, 2000). In 2000, the report of the Web-Based Education Commission to the President and Congress concluded that the Internet was no longer in question as a tool to transform the way teaching and learning was offered. 
The Commission recommended that the nation embrace E-learning as a strategy to provide on-demand, high-quality teaching and professional development to keep the United States competitive in the global workforce. They also recommended continued funding of research into teaching and learning utilizing web-based resources (Web-Based Education Commission, 2000). The acceptance of the importance of the Internet for delivery of higher education opened new opportunities for research and continued the academic debate of the quality of instruction delivered in online education courses and programs. In a longitudinal study from 2002-2007, The Sloan Consortium, a group of higher education institutions actively involved in online education, began studies of online 18 education in the United States over a period of 5 years. In the first study, researchers Allen and Seaman (2003) conducted polls of postsecondary institutions involved with online education and found that students overwhelming responded to the availability of online education, with over 1.6 million students taking at least one online course during the Fall semester of 2002. Over one third of these students took all of their courses online. The survey also found that in 2002, 81% of all institutions of higher education offered at least one fully online or blended course (Allen & Seaman, 2003). In their intermediate report in 2005, Allen and Seaman postulated that online education had continued to make inroads in postsecondary education, with 65% of schools offering graduate courses and programs face-to-face also offering graduate courses online. Sixty-three percent of undergraduate institutions offering face-to-face courses also offered courses online. From 2003 to 2005, the survey results showed that online education, as a long-term strategy for institutions, had increased from 49% to 56%. In addition, core education online course offerings had increased (Allen & Seaman, 2005). In Allen and Seaman’s final report (2007b) for the Sloan Consortium, the researchers reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. Allen and Seaman also reported a 9.7% increase in online enrollment, compared to the 1.5% growth in overall higher education. They found by 2007, 2-year institutions had the highest growth rates and accounted for over the half the online enrollments in the previous 5 years. The researchers concluded, based on a survey 19 conducted as part of the research, institutions believed that improved student access was the top reason for offering online courses and programs (Allen & Seaman, 2007b). Community colleges began embracing distance education in the 1920s as part of their mission to provide low-cost, time-effective education. Community colleges initially provided correspondence courses by mail, but later switched to television and video courses as technology improved (Cohen & Brawer, 1996). In 2001, over 90% of public 2- year colleges in the United States provided distance education courses over the Internet (NCES, 2001). Vocational education, by the nature of its instructional format, was among the last of the educational institutions to participate in distance education. Because of the kinesthetic nature of instruction, vocational education leaders began investigating distance education opportunities in the 1990s, relying on the method to provide only the lecture portion of instruction. 
By 2004, only 31% of students enrolled in vocational schools had participated in some form of distance education during their program of study (NCES, 2005). In 2008, hands-on instruction in programs such as automobile mechanics and welding, and the clinical portion of health occupations programs, continued to be taught in the traditional classroom setting (NCES, 2008). Analysis of data reported by the NCES indicated that distance education had become a staple for higher education institutions. At both the 4-year and 2-year university level, over 65% of institutions offered more than 12 million courses in 2006-2007 by distance education. While vocational education had traditionally been more hands-on, distance education had become more prevalent in providing opportunities for students to participate in components of the system over the Internet (NCES, 2008). 20 Distance education became the prevalent strategy for higher education institutions to expand their services to new and returning students, without the financial implications of capital expansion. Higher education utilized the strategy to market to students outside their traditional geographic reach by utilizing the power of the Internet. The increasing demand from students of all ages for online opportunities provided new ground for the expansion of higher education opportunities. Grades as an Indicator of Quality of Student Learning The grading system in the United States educational system has served as an indicator of knowledge for over 100 years. Educators have utilized high school grades as a sorting mechanism in American schools to determine postsecondary opportunities. Modern society has accepted honors attainment, graduation honors, and course grades as an indicator of knowledge acquisition in postsecondary education. Stray (2001) reported that the use of grading in schools can be traced to the industrial revolution and the development of factories. William Farish of Cambridge University developed the first grading system in higher education in 1792 (Stray, 2001). Farish mimicked the system established by factories of the time: grade A being the best. The thought was that Farish employed the grading system in order to teach more students, an aberration at that time when instructors rarely had more than a few. The demand for more higher education opportunities prompted Farish to open his class to more students, and as such, led to his use of a sorting system. This was the first known record of grading utilized in classrooms to measure student achievement (Stray, 2001). 21 Smallwood (1935) reported the first grading in higher education at Yale University in 1792. Stiles, President of Yale University, directed the use of the scale in the late 18th century. However, Smallwood noted it was not until 1813 that any record of grades or marking appeared. Using a scale of 100, philosophy and mathematic professors instituted the first use of a marking instrument in the 1800s at Harvard. Smallwood noted early systems were experimental, utilizing different numerical scales, with no standardized system in place between higher education institutions. It was not until the late 1800s that faculty began using descriptors, such as A and B, to rank students according to a predetermined numerical scale (Smallwood, 1935). Experimentation with evaluation of achievement continued into the early 20th century, when educational psychologists, including Dewey and Thorndike, attempted to compare grading scales with intelligence testing. 
Thorndike’s philosophy of standardized testing and grading survived the 20th century, and his quote, “Whatever exists at all exists in some amount” (Thorndike, 1916, as cited in Ebel & Frisbie, p. 26) has been utilized in educational measurement textbooks as a validation of the use of standards of measurement to measure achievement (Ebel & Frisbie, 1991). The use of grades expanded to community colleges, high schools, and elementary schools in the early 1900s (Pressey, 1920). The use of grades throughout the educational system is fairly standardized today with the 4.0 scale. It is this standardization that allows comparison of grades as achievement between educational levels and institutions (Ebel & Frisbie, 1991) and allows grades to be utilized as a measure for comparison of educational achievement. 22 Researchers analyzing the success of community college transfer students have traditionally studied the grades of the traditional transfer student with an AA or AS degree. Keeley and House’s 1993 study of sophomore and junior transfer students at Northern Illinois University analyzed “transfer shock” (p. 2) for students matriculating from community colleges. The researchers found students who transferred from a community college obtained a grade point average significantly lower in their first semester than did students who began their college career at a 4-year institution. However, the results of the longitudinal studies showed that transfer students who persisted to graduation showed an equivalent GPA at baccalaureate completion (Keeley & House, 1993). Students who transferred from occupationally oriented degree programs typically were not included in traditional studies of transfer students. While the research in general does not include AAS students in traditional transfer data, limited conclusions were available comparing AAS students to traditional 4-year college attendees. Townsend’s study at the University of Missouri-Columbia (2002) showed no difference in grades at baccalaureate graduation between students with an AA/AS degree and students with an AAS degree. The use of grades as an indicator of the level of student achievement has been relied upon by studies comparing traditional classroom instruction and distance instruction. Research analyzing the effectiveness of student learning in distance education began with the first correspondence courses offered utilizing the mail service (Russell, 1999). The study of effectiveness of correspondence courses expanded to include new technologies, such as television and video courses, and increased with the proliferation of 23 online educational offerings. Researchers continued to challenge the effectiveness of learning methods not delivered in traditional higher education settings. In 1991, Russell reviewed over 355 studies, dating from the 1930s and continuing through the late 1980s, and found no significant difference in student learning using any form of distance education, as compared with students in classroom-based instruction (Russell, 1999). Russell’s conclusion formed the basis for a series of works collectively known as “No Significant Difference.” Russell’s conclusion from his studies follows: The fact is the findings of comparative studies are absolutely conclusive; one can bank on them. 
No matter how it is produced, how it is delivered, whether or not it is interactive, low tech or high tech, students learn equally well with each technology and learn as well as their on-campus, face-to-face counterparts even though students would rather be on campus with the instructor if that were a real choice. (p. xviii) Overwhelmingly, studies have supported Russell’s conclusions, including Neuhauser’s (2002) study of traditional face-to-face education and online education in a business communications class at a large urban university in North Carolina. Neuhauser concluded there was no significant difference in pre- and post-test scores of students enrolled in online and traditional communications classes. In addition, Neuhauser found no significant difference in final grades, homework grades, and grades on research papers, even though learners in the online course were significantly older than were learners in the traditional face-to-face section. The Summers et al. (2005) research included a comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. 24 The study, conducted at the University of Missouri-Columbia, included undergraduate nursing students who were tested on both their pre- and post-course knowledge of statistics. Their results indicated that utilizing grades as an indicator of knowledge showed no significant difference between the online and traditional classroom students. In their meta-analysis, Machtmes and Asher (2002) reviewed 30 studies and concluded there did not appear to be a difference in achievement, as measured by grades, between distance and traditional learners. As technology use continued to evolve in online education, various studies were conducted to determine whether different delivery methods created a difference in the grades of online students compared to their face-to-face counterparts. A study conducted by Carmel and Gold (2007) supported Russell’s original conclusion by analyzing specific types of online platforms and delivery models. Carmel and Gold’s study included hybrid and traditional classroom-based instruction. They analyzed results from 164 students in 110 courses and found no significant difference in student achievement based on grades between students enrolled in either delivery method. Additional studies supporting Russell’s theory have crossed multiple content areas and delivery models. Brown and Liedholm’s (2002) study at Michigan State University included microeconomics students in virtual, hybrid, and traditional classroom-based instruction. The study included 389 students in the traditional setting, 258 in the hybrid delivery section and 89 students enrolled in online education. No significant difference in student learning as measured by end of course grades was found. Research also showed type of course discipline is not affected by the online delivery model. Schulman and Simms (1999) compared pretest and posttest scores of 25 students enrolled in an online course and a traditional course at Nova Southeastern University. The researchers compared 40 undergraduate students enrolled in online courses and 59 undergraduate students enrolled in the classroom setting of the same course. Results indicated that the students who select online courses scored higher than traditional students scored on the pretest results. However, posttest results showed no significant difference for the online students versus the in-class students. 
Schulman and Simms concluded that online students were learning equally as well as their classroombased counterparts. Reigle’s (2007) analysis across disciplines at the University of San Francisco and the University of California found no significant difference between online and face-to-face student grade attainment. Shachar and Neumann (2003) conducted a meta-analysis that estimated and compared the differences between the academic performance of students enrolled in distance education compared to those enrolled in traditional settings over the period from 1990-2002. Eighty-six studies containing data from over 15,000 participating students were included in their analysis. The results of the meta-analysis showed that in two-thirds of the cases, students taking courses by distance education outperformed their student counterparts enrolled in traditionally instructed courses. Lynch, during the use of the “Tegrity” system, a brand-specific online platform at Louisiana State University, found that students’ grades were slightly better after utilizing the technology than when the traditional approach was used (Lynch, 2002). Initial results of a University of Wisconsin-Milwaukee study of 5000 students over 2 years indicated that the U-Pace online students performed 12% better than their traditional Psychology 101 counterparts on the same cumulative test (Perez, 2009). Arle’s (2002) study found 26 students enrolled in online human anatomy courses at Rio Salado College scored an average of 6.3% higher on assessments than the national achievement average. Students were assessed using a national standardized test generated by the Human Anatomy and Physiology Society, whose norming sample is based entirely on traditional classroom delivery (Arle, 2002). In a study conducted by Stephenson, Brown, and Griffin (2008), comparing three different delivery styles (traditional, asynchronous electronic courseware, and synchronous e-lectures), results indicated no increased effectiveness of any delivery style when all question types were taken into account. However, when results were analyzed, students receiving traditional lectures showed the lowest levels on questions designed to assess comprehension. Research found supporters in higher education academic leaders. In a 2006 survey of Midwestern postsecondary institutions concerning their online offerings, 56 % of academic leaders in the 11 states rated the learning outcomes in online education as the same or superior to those in face-to-face instructional settings. The proportion of higher education institutions believing that online learning outcomes were superior to those for face-to-face outcomes was still relatively small, but had grown by 34% since 2003, from 10.2 to 13.7 % (Allen & Seaman, 2007b). This belief added merit to the conclusions supported by Russell and others. Russell’s (1999) “no significant difference” conclusion had its detractors. The most commonly cited is Phipps and Merisotis (1999), who reviewed Russell’s original meta-analysis (1999) and reported a much different conclusion. They concluded that the overall quality of the original research was questionable, that much of the research did 27 not control for extraneous variables, and therefore it could not show cause and effect. They included in their findings evidence that the studies utilized by Russell (2000) in the meta-analysis did not use randomly selected subjects, did not take into effect the differences among students, and did not include tests of validity and reliability. 
The Phipps and Merisotis (1999) analysis included the conclusion that research has focused too much on individual courses rather than on academic programs, and has not taken into account differences among students. They postulated that based on these conclusions, there is a significant difference in the learning results, as evidenced by grades, of students participating in distance education as compared to their classroombased peers. Their analysis of Russell’s original work questioned both the quality and effectiveness of research comparing distance and traditional education delivery. While there has been ongoing conjecture that online education students are not receiving an equivalent learning experience compared to their traditional classroom counterparts, studies utilizing grades as an indicator of student learning have produced little evidence of the disparity. The incidence of studies showing significant negative differences in grades of online learners is small. Higher education institutions have indicated their support for online education, and its continued growth has allowed studies such as the present research to contribute to ongoing dialogue. Student Retention in Postsecondary Education Persistence and retention in higher education is an issue that has intrigued researchers for over 50 years. Quantitative studies conducted in the mid-20th century produced data that caused researchers to look at low retention rates in higher education 28 and search for answers. This question has continued to consume researchers and higher education institutions. In 1987, Tinto attempted to summarize studies of individual student retention in higher education by proposing a theory to allow higher education administrators to predict success and support students (Tinto, 1987). Tinto’s model of student engagement has been in use for over 20 years as higher education administrators and faculty attempt to explain student retention issues at universities and colleges. Tinto’s model primarily focused on factors of student engagement: How students respond to instructors, the higher education community itself, and students’ own engagement in learning are the primary factors Tinto theorized as determining the student’s retention. In the concluding remarks to his 1987 treatise on retention, Tinto acknowledged that persistence in higher education is but one facet of human growth and development, and one that cannot necessarily be attributed to a single factor or strategy. Tinto’s (1987) original study of student retention included the observation that student retention is a complicated web of events that shape student leaving and persistence. He observed that the view of student retention had changed since the 1950s, when students were thought to leave due to lack of motivation, persistence, and skills, hence the name dropout. In the 1970s, research began to focus on the role of the environment in student decisions to stay or leave. In the 1990s, Tinto proposed that the actions of the faculty were the key to institutional efforts to enhance student retention (Tinto, 2007). This was a significant addition to his theory, placing the cause on the instructor instead of the student, and it has done much to influence retention strategies 29 utilized in higher education institutions (Tinto, 2007). Tinto’s studies have driven research in both traditional retention studies and those involving distance education. 
Studies of the persistence of the postsecondary student routinely focus on 4-year postsecondary education. It is only within the last 20 years that persistence studies have included community college students and occupational students, acknowledging that their reasons for entering the postsecondary community are different from the traditional 4- year higher education participant (Cohen & Brawer, 1996). With different avenues to a baccalaureate degree more prevalent, the research into college persistence has expanded to include other types of programs and students. Postsecondary student retention rates routinely utilize data from longitudinal studies of students entering in a Fall semester and completing a bachelor’s program no more than 6 years later (NCES, 2003). The National Center for Education Statistics reported that 55% of those seeking a baccalaureate degree would complete in 6 years (NCES, 2003). The report acknowledged institutions are unable to follow students who transfer to other institutions; they are able to report only the absence of enrollment in their own institution. Research has also found a large gap between community college entrants and 4- year college entrants in rates of attaining a bachelor’s degree. Dougherty (1992) reported that students entering community college receive 11 to 19% fewer bachelor’s degrees than students beginning at a 4-year university. Dougherty postulated that the lower baccalaureate attainment rate of community college entrants was attributable to both their individual traits and the institution they entered (Dougherty, 1992). 30 Studies of student retention of community college also vary based on the types of students. Community college retention rates are routinely reported as lower than traditional 4-year institutions (NCES, 2007). Cohen and Brawer (1996) attributed the differences in retention to the difference in the mission. In many instances, students did not enroll in a community college in order to attain a degree (Cohen & Brawer, 1996). The most recent longitudinal study in 1993 showed a retention rate of 55.4% of students after 3 years (NCES, 2001). Of community college students, only 60.9% indicated a desire to transfer later to a baccalaureate degree completion program (NCES, 2003). While retention data collected by the federal government (NCES, 2003) did not include students with an AAS degree, Townsend’s studies of the transfer rates and baccalaureate attainment rates of students in Missouri who had completed an Associate of Arts and students who had completed an Associate of Applied Science degree was 61% compared to 54% (Townsend, 2001). Vocational or occupational programs have reported retention rates as “program completion,” a definition involving completion of specific tasks and competencies instead of grades and tied to a limited program length. This state and federal requirement indicates program quality and ensures continued federal funding. In 2001, the U.S. Department of Education reported a 60.1% completion rate of postsecondary students enrolled in occupational education (NCES, 2007). Until 1995, the reasons for students leaving was neither delineated nor reported; it was not until federal reporting requirements under the Carl Perkins Act of 1994 that institutions were required to explore why students were not retained in vocational programs (P.L. 105-332). 31 Distance education provided a new arena for the study of student persistence. 
Theorists and researchers have attempted to utilize Tinto’s model of student persistence to explain retention issues involved with distance education. However, Rovai (2003) analyzed the differing student characteristics of distance learners as compared to the traditional students targeted by Tinto’s original models and concluded that student retention theories proposed from that population were no longer applicable to distance education learners. Rovai proposed that distance educators could address retention in ways that traditional higher education has not. He suggested that distance educators utilize strategies such as capitalizing on students’ expectations of technology, addressing economic benefits and specific educational needs to increase student retention in courses (Rovai, 2003). The expanded use of technology created a distinct subset of research into student retention issues. In 2004, Berge and Huang developed an overview of models of student retention, with special emphasis on models developed to explain the retention rates in distance education. Their studies primarily focused on the variables in student demographics and external factors, such as age and gender, which influence persistence and retention in online learning. Berge and Huang found that traditional models of student retention such as Tinto’s did not acknowledge the differences in student expectations and goals that are ingrained in the student’s selection of the online learning option. Other researchers have attempted to study retention issues specifically for online education. In a meta-analysis, Nora and Snyder (2009) found the majority of studies of online education focused on students’ individual characteristics and individual 32 perceptions of technology. Nora and Snyder concluded that researchers attempt to utilize traditional models of student engagement to explain student retention issues in distance or online learning courses, with little or no success. This supported Berge and Huard’s conclusions. Nora and Snyder (2009) also noted a dearth of quantitative research. Few quantitative studies exist that support higher or equal retention in online students compared to their classroom-based counterparts. One example is the Carmel and Gold (2007) study. They found no significant difference in student retention rates between students in distance education courses and their traditional classroom-based counterparts. The study utilized data from 164 students, 95 enrolled in classroom-based courses and 69 enrolled in a hybrid online format. Participants randomly self-selected and were not all enrolled in the same course, introducing variables not attributed in the study. The majority of quantitative studies instead concluded there is a higher retention rate in traditional classrooms than in distance education. In the Phipps and Merisotis (1999) review of Russell’s original research, which included online education, results indicated that research has shown even lower retention rates in online students than in students attending classes in the traditional college setting. The high dropout rate among distance education students was not addressed in Russell’s meta-analysis, and Phipps and Merisotis found no suitable explanation in the research. They postulated that the decreased retention rate documented within distance education studies skews achievement data by excluding the dropouts. 
Diaz (2002) found a high drop rate for online students compared to traditional classroom-based students in an online health education course at Nova Southeastern. Other studies have supported the theory that retention of online students is far below that 33 of the traditional campus students. In 2002, Carr, reporting for The Chronicle of Higher Education, noted that online courses routinely lose 50 % of students who originally enrolled, as compared to a retention rate of 70-75% of traditional face-to-face students. Carr reported dropout rates of up to 75% in online courses as a likely indicator of the difficultly faced in retaining distance education students who do not routinely meet with faculty. The data have not been refuted. As community colleges began utilizing distance education, retention rates were reported as higher than traditional students (Nash, 1984). However, the California Community College System report for Fall 2008 courses showed inconsistent retention results for distance education learners, varying by the type of course. Results indicated equivalent retention rates for online instruction compared to traditional coursework in the majority of courses. Lower retention rates were indicated in online engineering, social sciences, and mathematics courses as compared to traditional classroom instructional models (California Community Colleges Chancellor's Office, 2009). Due to the limited number of vocational/technical or occupational courses taught in the online mode, there was little data on student retention. In 1997, Hogan studied technical course and program completion of students in distance and traditional vocational education and found that course completion rates were higher for distance education students. However, program completion rates were higher for traditional students than for students enrolled in distance education (Hogan, 1997). In summary, studies of retention have focused primarily on student characteristics while acknowledging that postsecondary retention rates vary according to a variety of factors. Research showed mixed results concerning the retention rate of online students, 34 though quantitative data leans heavily toward a lower course retention rate in online students. Data from 4-year universities have shown lower retention rates for online students than for traditional face-to-face students, while community colleges have shown inconsistent results. Data from vocational-technical education has been limited, but course retention rates are higher for online students, while program retention rates are lower. No significant research factor affecting retention has been isolated between students in online baccalaureate completion programs and students participating in traditional classroom-based settings. Summary Research studies have been conducted analyzing student retention in higher education, transfer and retention of students from community colleges to universities, the impact of distance education, and student achievement and retention factors related to distance education. However, no comparative research was identified that compared the achievement and retention of students participating in an occupationally oriented transfer program utilizing both online education and traditional classroom-based instruction. Chapter Three addresses the topics of research design, hypotheses, and research questions. Additionally, population and sample, data collection, and data analysis are discussed. 
CHAPTER THREE
METHODOLOGY

The purpose of this study was to determine if there is a significant difference between course grades of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The study also examined if there is a significant difference between course retention and program retention of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The methodology employed to test the research hypotheses is presented in this chapter. The chapter is organized into the following sections: research design, hypotheses and research questions, population and sample, data collection, data analysis, and summary.

Research Design

A quantitative, quasi-experimental research design was selected to study grades, course retention, and program retention in students enrolled in the Technology Administration program. The design was chosen as a means to determine if significant differences occur between online and face-to-face students by examining numerical scores from all participants enrolled, and retention rates in both courses and programs in the Technology Administration program.

Hypotheses and Research Questions

This study focused on three research questions with accompanying hypotheses. The research questions and hypotheses guiding the study follow.

Research Question 1: Is there a statistically significant difference between students' grades in online classes and traditional face-to-face classes?

H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance.

Research Question 2: Is there a statistically significant difference between the course retention rate of students in online classes and traditional face-to-face classes?

H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

Research Question 3: Is there a statistically significant difference in program retention between students who entered the program in online classes and students who entered the program in traditional face-to-face classes?

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

Population and Sample

The two populations selected were students enrolled in online and face-to-face courses. The sample included students enrolled in Technology Administration courses. Student enrollment was analyzed for all Technology Administration courses in the program sequence to determine the number of samples available in online and face-to-face classes. The course enrollment data for the sample are outlined in Table E1. The subsample of the data utilized for the study is presented in Table 1.
Table 1
Technology Administration Enrollment Data

Year        Instructor   TA 300 FTF   TA 300 OL   TA 310 FTF   TA 310 OL
Spring 02   A                 —            —           14           25
Fall 02     A                11           20            9           26
Spring 03   A                 —            —           29           38
Fall 03     A                20           29           13           34
Spring 04   B                 —            —           32           25
Fall 04     B                18           32           10           28
Spring 05   B                 —            —           23           31
Fall 05     B                15           28           11           28
Spring 06   B                 —            —           13           30
Fall 06     B                14           24           24           32
Spring 07   B                 —            —           15           33
Fall 07     B                16           23           27           30
Spring 08   B                 —            —           22           35
Total                        94          156          242          395

Note: TA 300 = Evolution and Development of Technology; TA 310 = Technology and Society. FTF = face-to-face; OL = online. Dashes indicate semesters with no enrollment data for that course.

The subsample for hypothesis 1 and hypothesis 2 included all students enrolled in two entry-level courses required for completion of the Technology Administration program: TA 300 Evolution and Development of Technology, and TA 310 Technology and Society. The university offered the courses in online and face-to-face formats during the period of the study. Two instructors, identified as A and B, were involved with teaching the online and face-to-face courses. Two courses were selected that met the following criteria: (a) the same faculty member taught both courses, (b) the courses were offered over the period of the study consistently in online and face-to-face instruction, and (c) the syllabi for simultaneous online and face-to-face sections were identical. For hypothesis 3, data included records of all students enrolled in TA 300 Evolution and Development of Technology for the Fall semesters of 2002, 2003, 2004, 2005, and 2006. The course was selected for inclusion in the study based on the following criteria: (a) student enrollment in the course was the result of declaration of the Technology Administration program major, and (b) parameters of the study allowed students 2 or more years to complete the program requirements. For the purpose of the study, all student names were removed.

Data Collection

An Institutional Review Board (IRB) form was prepared for Washburn University approval prior to data collection. The study was designated as an exempt study. The Washburn University IRB form is provided in Appendix A. Approval of the IRB was transmitted by e-mail. A copy is located in Appendix B. In addition, an IRB form was submitted to Baker University. The form is located in Appendix C. The Baker IRB approval letter is located in Appendix D. Washburn University had two types of data collection systems in place during the period identified for the study, Spring 2002 through Spring 2008. The AS 400 data collection system generated paper reports for 2002 and 2003. The researcher was allowed access to paper records for 2002 and 2003. Enrollment results for all Technology Administration sections for 2002-2003 were entered manually into an Excel spreadsheet. In 2004, the University transferred to the Banner electronic student data management system. All records since 2004 were archived electronically and were retrieved utilizing the following filters for data specific to students enrolled in the identified Technology Administration courses: TA course designation and specific coding for year and semester to be analyzed (01 = Spring semester, 03 = Fall semester, 200X for specified year). Results retrieved under the Banner system were saved as an Excel spreadsheet by the researcher. The course enrollment data for the sample are presented in Tables E1 and E2. Student transcripts and records were analyzed to determine program completion or continued enrollment in the program for program retention analysis.
Documents examined included paper student advising files located within the Technology Administration department and specific student records housed within the Banner reporting system. Technology Administration course TA 300 was selected based on the following: (a) It is a required entry course only for Technology Administration majors, and (b) TA 310 is a dual enrollment course for business department majors. Data Analysis Data analysis for all hypothesis testing was conducted utilizing SPSS software version 16.0. The software system provided automated analysis of the statistical measures. To address Research Question 1, a two-factor analysis of variance was used to analyze for a potential difference in delivery method (online and face-to-face), potential 40 difference in instructor (instructors A and B), and potential interaction between the two factors. When the analysis of variance reveals a difference between the levels of any factor, Salkind (2008) referred to this as the main effect. This analysis produces three F statistics: to determine if a difference in grades of online students as compared to their classroom based counterparts was affected by a main effect for delivery, a main effect for instructor, and for interaction between instructor and delivery. Chi-square testing was selected to address research questions 2 and 3. The rationale for selecting chi-square testing was to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Salkind, 2008). If the obtained chi-square value is greater than the critical value, it indicates there is sufficient evidence to believe the research hypothesis is true. For research question 2, a chi-square test for differences between proportions analyzed course retention of online and face-to-face students at the end of semester. For Research Question 3, a chi-square test for differences between proportions analyzed program retention comparing students who began the program in the online section of TA 300 to the students who began in the face-to-face section. Limitations of the Study Roberts (2004) defined the limitations of the study as those features of the study that may affect the results of the study or the ability to generalize the results. The limitations of this study included (a) potential for data entry error, (b) curriculum modifications not reflected in the syllabi made by instructors over the period of the study, (c) behavior of the instructors during delivery in the two different formats, and (d) 41 rationale of students for selecting one course delivery method over another. These may affect the generalizability of this study to other populations. Summary This chapter described the research design, population and sample, hypotheses, data collection, and analysis used in this research study. Statistical analysis using twoway analysis of variance and chi-square were used to determine if there are significant statistical differences in the course grades, course retention, and program retention of students enrolled in online classes as compared to their face-to face counterparts. The results of this study are presented in Chapter Four. 42 CHAPTER FOUR RESULTS The study had three main purposes. The first purpose was to determine if there was a difference in grades between students in online classes and students in traditional face-to-face classes in the Technology Administration program. 
In addition, the study was designed to examine the difference in course retention rates of students in the online classes as compared to the face-to-face classes. The third part of the study was designed to examine program retention rates of students who began the program in online classes and students who began the program in traditional face-to-face classes. This chapter begins with the descriptive statistics for the sample: gender, age, grades by gender, and course selection of students in online or face-to-face courses by gender. From the three research questions, research hypotheses were developed, and the results of the statistical analyses used to test each hypothesis are presented.

Descriptive Statistics

Demographic data for the sample were collected from the student data system for 2002 through 2009. The descriptive statistics presented below include gender (n = 884), age (n = 880), grades by gender (n = 884), and course selection online or face-to-face by gender (n = 884). Table 2 presents the cross-tabulation of age group and gender for the sample selected for the study. The mean age for the sample tested was 31.06 years, with a standard deviation of 9.46 years. The age range of the sample was from 18 to 66 years. One participant did not report gender. Age was not available for three participants.

Table 2
Participant Age Group by Gender (n = 880)

Age range (years)   < 20   20-29   30-39   40-49   50-59   60-69
Female                 0     198     121      62      29       3
Male                   5     281     104      53      19       5

Note: Gender not reported for one participant; age not reported for four participants. Females = 413; Males = 467.

Table 3 presents the frequency of course grades by gender and the total number of students receiving each grade. Grades were distributed across the continuum, with slightly more females than males receiving A's, more males than females receiving B's, C's, and F's, and an equal distribution of students receiving D's. More males withdrew from classes than did females.

Table 3
Course Grades by Gender (n = 884)

Grade               Female   Male   Total
A                      245    208     453
B                       53     79     132
C                       32     70     102
D                       17     16      33
F                       37     55      92
No Credit                1      0       1
Passing                  0      1       1
Withdraw                25     42      67
Withdraw Failing         3      0       3
Total                  413    471     884

Note: Gender not reported for one participant.

Table 4 presents the course selection patterns of male and female students. Overall, more students selected online courses than face-to-face courses. Females and males enrolled in online courses in roughly equal numbers; however, proportionally more females (68.7%) than males (60.9%) chose the online instructional format instead of face-to-face.

Table 4
Course Selection by Gender (n = 884)

Course type     Female   Male   Total
Face-to-face       129    184     313
Online             284    287     571
Total              413    471     884

Note: Gender not reported for one participant.

Hypothesis Testing

H1: There is a statistically significant difference in the course grades of students enrolled in online classes and students enrolled in a traditional classroom setting at the 0.05 level of significance. The sample consisted of 815 students enrolled in online and face-to-face Technology Administration courses at Washburn University. A two-factor analysis of variance was used to analyze for the potential difference in course grades due to delivery method (online and face-to-face), the potential difference due to instructor (instructors A and B), and the potential interaction between the two independent variables. Mean and standard deviation for grades were calculated by delivery type and instructor. Table 5 presents the descriptive statistics, following a brief computational sketch of the statistical tests.
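As noted in Chapter Three, the analyses in this study were conducted with SPSS version 16.0. Purely as an illustrative sketch, and not the procedure actually used in the study, the following Python fragment shows how the same three tests could be expressed with scipy and statsmodels. The ANOVA portion assumes a hypothetical student-level file and hypothetical column names (grade_points, delivery, instructor); the chi-square portions use the contingency counts reported later in this chapter in Tables 7 and 8.

```python
# Illustrative sketch only; the study's analyses were run in SPSS 16.0.
from scipy.stats import chi2_contingency

# --- H1 (sketch): two-factor ANOVA of course grade by delivery and instructor ---
# Assumes a hypothetical student-level CSV with columns grade_points, delivery, instructor.
# import pandas as pd
# import statsmodels.api as sm
# import statsmodels.formula.api as smf
# df = pd.read_csv("ta_course_records.csv")  # hypothetical file name
# model = smf.ols("grade_points ~ C(delivery) * C(instructor)", data=df).fit()
# print(sm.stats.anova_lm(model, typ=2))     # F and p for delivery, instructor, interaction

# --- H2: chi-square test of course retention (counts from Table 7) ---
course_table = [[294, 19],    # face-to-face students: retained, not retained
                [519, 52]]    # online students: retained, not retained
chi2, p, dof, expected = chi2_contingency(course_table, correction=False)
print(f"H2: X2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")

# --- H3: chi-square test of program retention (counts from Table 8) ---
program_table = [[66, 5],     # began program face-to-face: retained, not retained
                 [163, 15]]   # began program online: retained, not retained
chi2, p, dof, expected = chi2_contingency(program_table, correction=False)
print(f"H3: X2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```

Run without the Yates continuity correction, the two 2 × 2 chi-square statistics reproduce the values reported below for H2 and H3 (approximately 2.52 and 0.13, respectively). The ANOVA sketch requests Type II sums of squares, which is an analysis choice for the illustration rather than a detail documented for the original SPSS runs.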
The mean of grades by delivery showed no significant difference between online and face-to-face instruction. Additionally, no significant difference in mean grade was evident when analyzed by instructor.

Table 5
Means and Standard Deviations by Course Type and Instructor

Course type     Instructor    Mean    Standard deviation      n
Face-to-face    A            3.0690             1.41247      29
                B            2.9586             1.39073     266
                Total        2.9695             1.39084     295
Online          A            2.9024             1.52979      41
                B            3.0271             1.35579     479
                Total        3.0271             1.36911     520
Total           A            2.9714             1.47414      70
                B            3.0027             1.36783     745
                Total        3.000              1.37635     815

The results of the two-factor ANOVA, presented in Table 6, indicated there was no statistically significant difference in grades due to delivery method (F = 0.078, p = 0.780, df = 1, 811). This test was specific to hypothesis 1. In addition, there was no statistically significant difference in grades due to instructor (F = 0.002, p = .967, df = 1, 811), and no significant interaction between the two factors (F = 0.449, p = 0.503, df = 1, 811). The research hypothesis was not supported.

Table 6
Two-Factor Analysis of Variance (ANOVA) of Delivery by Instructor

Source                   df        F        p
Delivery                  1    0.148    0.780
Instructor                1    0.003    0.967
Delivery × Instructor     1    0.449    0.503
Error                   811
Total                   815

H2: There is a statistically significant difference in student course retention between students enrolled in online courses and students enrolled in face-to-face courses at the 0.05 level of significance. The sample consisted of 884 students enrolled in TA 300 and TA 310 online and face-to-face courses. The hypothesis testing began with the analysis of the contingency data presented in Table 7. The data are organized with course selection (online or face-to-face) as the row variable and retention in the course as the column variable. Data were included in the retained column if a final grade was reported for the participant. Participants who were coded as withdraw or withdraw failing were labeled as not retained. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Roberts, 2004). The result of the chi-square testing (X2 = 2.524, p = .112, df = 1, 884) indicated there was no statistically significant difference between retention of students enrolled in online courses compared to students enrolled in face-to-face courses in the TA program. Additional results indicated that 93.92% (294/313) of the face-to-face students were retained, compared to 90.89% (519/571) of the online students. The research hypothesis was not supported.

Table 7
Course Retention of Online and Face-to-Face TA Students

                          Retained   Not retained   Total
Face-to-face students          294             19     313
Online students                519             52     571
Total                          813             71     884

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance. The sample consisted of 249 students enrolled in TA 300 in the online and face-to-face courses from Fall 2002 through Fall 2008. The hypothesis testing began with the analysis of the contingency data located in Table 8. The table is organized with course selection (online or face-to-face) as the row variable and program retention as the column variable. Data were included in the retained column if students had successfully met the requirements for a Bachelor of Applied Science in Technology Administration or if they were enrolled in the program in Spring 2009.
Data were included in the not-retained column if students had not fulfilled degree requirements and they were not enrolled in Spring 2009. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Roberts, 2004). The result of the chi-square testing (X2 = .132, p = .717, df = 1, 249) indicated there was no statistically significant difference between the program retention rate of students who began the TA program in the online courses compared to the students who began the program in the face-to-face courses. Additional results showed that 91.57% (163/178) of students who began in online courses were retained, compared to 92.96% (66/71) of students who began the TA program in face-to-face courses. The research hypothesis was not supported.

Table 8
Program Retention of Online and Face-to-Face TA Students

               Retained   Not retained   Total
Face-to-face         66              5      71
Online              163             15     178
Total               229             20     249

Summary

In this chapter, an introduction provided a summary of the analysis and statistical testing in the order in which it was presented. This was followed by descriptive statistics of the sample, including age range of participants, grades by gender, and course selection by gender. Results from testing of H1 revealed no significant difference between course grades of online students and students enrolled in traditional face-to-face classes. Chi-square testing was utilized for testing of H2. Results indicated there was no significant difference in course retention of students enrolled in online courses and students enrolled in traditional face-to-face courses. H3 was also tested utilizing chi-square testing. The results indicated no significant difference in program retention of students who began the TA program in online courses and students who began in traditional face-to-face courses. Chapter Five provides a summary of the study, discussion of the findings in relationship to the literature, implications for practice, recommendations for further research, and conclusions.

CHAPTER FIVE
INTERPRETATION AND RECOMMENDATIONS

Introduction

In the preceding chapter, the results of the analysis were reported. Chapter Five consists of the summary of the study, an overview of the problem, purpose statement and research questions, review of the methodology, major findings, and findings related to the literature. Chapter Five also contains implications for further action and recommendations for further research. The purpose of the latter sections is to expand on the research into distance education, including implications for expansion of course and program delivery and future research. Finally, a summary is offered to capture the scope and substance of what has been offered in the research.

Study Summary

The online delivery of course content in higher education has increased dramatically in the past decade. Allen and Seaman (2007a) reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. They also reported a 9.7% increase in online enrollment compared to the 1.5% growth in overall higher education. As online delivery has grown, so has criticism of its efficacy. Online delivery of education has become an important strategy for the institution that is the setting of this study. The purpose of this study was three-fold.
The first purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroombased counterparts. The second purpose of the study was to determine if there was a 52 significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study was designed to expand the knowledge base concerning online education and its efficacy in providing baccalaureate degree completion opportunities. The research design was a quantitative study to compare course grades, course retention, and program retention of students enrolled in the online and traditional face-toface TA program at Washburn University. Archival data from the student system at Washburn University was utilized to compare online and traditional face-to-face students. In order to answer Research Question 1, a sample of students enrolled in TA 300 and TA 310 online and traditional face-to-face courses was analyzed. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006. Two instructors were responsible for concurrent instruction of both the online and faceto-face classes for the period analyzed. A two-factor analysis of variance was used to analyze for a potential difference in the dependent variable, course grades, due to delivery method (online and face-to-face), the instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze both course and program retention (Research Questions 2 and 3). For Research Question 2, archived data from the Washburn University student system was analyzed for students enrolled in TA 300 and TA 310. Additional variables identified for this sample included 53 course selection and instructor (A or B). For Research Question 3, archived data from the Washburn University system was used, which identified students with declared Technology Administration majors who began the TA program enrolled in online and face-to-face courses. A single gatekeeper course (TA 300) was identified for testing. Two instructors (A and B) were responsible for instruction during the testing period. A two-factor ANOVA was utilized to test H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance. ANOVA testing was utilized to account for the two delivery methods and two instructors involved for the period of the study. The results of the test indicated there was no statistically significant difference in grades due to delivery method. The results of the testing also indicated no statistically significant difference in grades due to instructor and no interaction between the two independent variables. The research hypothesis was not supported. To test the next hypothesis, chi-square testing was utilized. 
H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in course retention of students enrolled in online courses and students enrolled in face-to-face courses in the TA program. The research hypothesis was not supported. To test the final hypothesis, chi-square testing was also used. H3: There is a statistically significant difference in program retention between students who begin the 54 Technology Administration program in online courses and students who begin in face-toface courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in the program retention rate of students who began the TA program in the online courses and students who began the program in the face-to-face courses. The research hypothesis was not supported. Testing found that course retention was high in both formats, leading to interpretation that higher results may be due to the age of participants or prior degree completion. The results found no significant difference in grades, course, or program retention for students in online TA courses and students enrolled in traditional face-to-face instruction. The implication of these results compared to current literature is discussed in the next section. Findings Related to the Literature Online education has become a strategy for higher education to provide instruction to students limited by distance or time, or who, for other reasons, do not wish to attend traditional classroom-based university classes. Additionally, online education allows higher education institutions to expand their geographic base. Institutions have utilized distance education for over a century to provide instruction, but it was only within the last two decades that instruction over the Internet had replaced correspondence, television, and video courses as the method of choice for delivery (Russell, 1999). Utilizing grades as a measure of achievement, meta-analyses conducted by Russell (1999), Shachar and Neumann (2003), and Machtmes and Asher (2002) found no significant difference in grades of online students and traditional classroom-based 55 students. These analyses utilized multiple studies of course information, comparing grades of online students and traditional face-to-face students, primarily utilizing t tests as the preferred methodology. The results of previous research were supported by the present study. Additionally, this study went further, analyzing data over more than one semester, controlling for the effect of different instructors. These results were contrary to the conclusion reached by Phipps and Merisotis (1999). The second purpose of the study was to determine if a significant difference existed between the course retention of students enrolled in online TA courses and students enrolled in face-to-face courses. Meta-analyses conducted by Phipps and Merisotis (1999) and Nora and Snyder (2009) concluded a much lower course retention rate in online students as compared to their face-to-face counterparts. The previous metaanalyses examined retention of online students and traditional face-to-face students in distinct courses, utilizing t tests as the primary methodology. 
The chosen method of t tests was used instead of the chi square testing due to the limitations of the studies to one course taught by one instructor, limited to one semester or cycle. Carr (2002) reported in The Chronicle of Higher Education that retention of online students was 50% less than that of traditional face-to-face students. Carr’s results were based on the examination of longitudinal retention data from universities as reported to the United States Department of Education. The results of the present study found no significant difference in the course retention rates. These results are supported by the findings of Carmel and Gold (2007) in which they reported no significant difference in course retention rates of online students compared to traditional face-to-face students in their analysis of students in multiple 56 courses in disciplines across a 4-year university. The present study expanded those results, examining course data in the same discipline over a 6-year period and controlling for delivery by two separate instructors. Research into program completion rates of AAS students has been conducted primarily in traditional university settings, including Townsend’s (2002) studies at the University of Missouri-Columbia. Townsend’s results showed a lower baccalaureate completion rate for students entering with an AAS than students who transferred to 4- year universities with an AA degree. Studies by Hogan (1997) of vocational-education programs also found a lower program completion rate for online students compared to students in traditional delivery vocational education programs. Analysis of the data in the current study showed no significant difference in program completion rate of students who began in online TA courses as compared to students who began the program in faceto-face courses. Conclusions The use of distance education for postsecondary instruction, primarily in the form of the Internet, has both changed and challenged the views of traditional university-based instruction. Multiple studies have been designed in an effort to examine whether online students have the same level of academic achievement as their traditional higher education peers. The present study agrees with the research indicating there is no statistically significant difference in the grades of online students and their face-to-face counterparts. In addition, with student retention an issue for all postsecondary institutions, the data from previous studies indicated a lower retention rate for online students than for their traditional face-to-face classmates. The current study contradicted 57 those arguments. In the following sections, implications for action, recommendations for research, and concluding remarks are addressed. Implications for Action As postsecondary institutions move into the 21st century, many have examined issues of student recruitment and retention in an effort to meet the demands of both their students and their communities. The majority of postsecondary institutions have initiated online education as a strategy to recruit students from beyond their traditional geographic areas. This study supported existing research utilizing grades as a measure of achievement and should alleviate doubt that online students are shortchanged in their education. The transition of existing face-to-face to courses to an online delivery model can be accomplished without sacrificing achievement of course and program goals. 
The study also examined course and program retention data, finding no significant differences between online and traditional students in the TA program. The findings of this study support the expansion of additional online courses and programs within the School of Applied Studies. Finally, this study can provide the basis for further action, including analyzing other programs and courses offered in the online format by the University. The analysis of other programs offered in an online delivery model would enhance further development of online courses and programs.
Recommendations for Future Research
Distance education delivery has expanded dramatically with the use of the Internet for online instruction. The present study could be continued in future years to measure the effects of specific curriculum delivery models and of changes made to online delivery platforms. In addition, the study could be expanded to include specific characteristics of student retention named in the literature, such as examining whether the age and entering GPA of students provide any insight into course and program retention. The study could also be expanded to include other universities with similar baccalaureate-degree completion programs and other disciplines. Because the body of research is limited concerning the baccalaureate-degree completion of students who begin their postsecondary education in career-oriented instruction, there is value in continuing to study baccalaureate completion rates, both in an online format and in more traditionally based settings.
Concluding Remarks
The current study examined a Technology Administration program that has been offered in both online and face-to-face formats, utilizing data from Fall 2002 through Spring 2008. The TA program was developed to allow students who had completed an occupationally oriented AAS degree to complete a bachelor's degree program. Three hypotheses were tested in this study, examining course grades, course retention, and program retention of students enrolled in online and face-to-face courses in Technology Administration. No significant difference was found for any of the three hypotheses. These results form a strong foundation for expanding online courses and programs at Washburn University. By addressing two of the major concerns of educators, achievement and retention, the study results allow expansion of online courses and programs to benefit from data-driven decision-making. Other institutions can and should utilize data to examine their existing online courses and programs.
REFERENCES
Allen, I. E., & Seaman, J. (2003). Seizing the opportunity: The quality and extent of online education in the United States, 2002 and 2003. Needham, MA: The Sloan Consortium.
Allen, I. E., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. Needham, MA: The Sloan Consortium.
Allen, I. E., & Seaman, J. (2007a). Making the grade: Online education in the United States. Needham, MA: The Sloan Consortium.
Allen, I. E., & Seaman, J. (2007b). Online nation: Five years of growth in online learning. Needham, MA: The Sloan Consortium.
Arle, J. (2002). Rio Salado College online human anatomy. In C. Twigg, Innovations in online learning: Moving beyond no significant difference (p. 18). Troy, NY: Center for Academic Transformation.
Atkins, T. (2008, May 13). Changing times bring recruiting challenges at WU. Retrieved May 15, 2008, from CJOnline Web site at http://cjonline.com/stories/051308/loc_278440905.shtml
Berge, Z., & Huang, L. P. (2004, May). A model for sustainable student retention: A holistic perspective on the student dropout problem with special attention to e-learning. American Center for the Study of Distance Education. Retrieved April 17, 2009, from DEOSNEWS Web site at http://www.ed.psu.edu/acsde/deos/deosnews/deosarchives.asp
Bradburn, E., Hurst, D., & Peng, S. (2001). Community college transfer rates to 4-year institutions using alternative definitions of transfer. Washington, DC: National Center for Education Statistics.
Brown, B. W., & Liedholm, C. (2002, May). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92, 444-448.
California Community Colleges Chancellor's Office. (2009, April 20). Retention rates for community colleges. Retrieved April 20, 2009, from https://misweb.cccco.edu/mis/onlinestat/ret_suc_rpt.cfm?timeout=800
Carmel, A., & Gold, S. S. (2007). The effects of course delivery modality on student satisfaction and retention and GPA in on-site vs. hybrid courses. Retrieved September 15, 2008, from ERIC database. (Doc. No. ED496527)
Carnevale, D. (2006, November 17). Company's survey suggests strong growth potential for online education. The Chronicle of Higher Education, p. 35.
Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students. The Chronicle of Higher Education, pp. 1-5.
Cohen, A., & Brawer, F. (1996). The American community college. San Francisco: Jossey-Bass.
Diaz, D. (2002, May-June). Online drop rates revisited. Retrieved April 8, 2008, from The Technology Source Archives Web site at http://www.technologysource.org/article/online_drop_rates-revisited/
Dougherty, K. J. (1992). Community colleges and baccalaureate attainment. The Journal of Higher Education, 63, 188-214.
Ebel, R., & Frisbie, D. (1991). Essentials of educational measurement. Englewood Cliffs, NJ: Prentice Hall.
The Harvard guide. (2004). Retrieved May 20, 2008, from http://www.news.harvard.edu/guide
Hogan, R. (1997, July). Analysis of student success in distance learning courses compared to traditional courses. Paper presented at the Sixth Annual Conference on Multimedia in Education and Industry, Chattanooga, TN.
Jacobs, J., & Grubb, W. N. (2003). The federal role in vocational education. New York: Community College Research Center.
Joliet Junior College history. (2008). Retrieved May 20, 2008, from Joliet Junior College Web site at http://www.jjc.edu/campus_info/history/
Kansas Board of Regents. (2002-2003). Degree and program inventory. Retrieved May 14, 2008, from http://www.kansasregents.org
Keeley, E. J., & House, J. D. (1993). Transfer shock revisited: A longitudinal study of transfer academic performance. Paper presented at the 33rd Annual Forum of the Association for Institutional Research, Chicago, IL.
Knowles, M. S. (1994). A history of the adult education movement in the United States. Melbourne, FL: Krieger.
Laanan, F. (2003). Degree aspirations of two-year students. Community College Journal of Research and Practice, 27, 495-518.
Lynch, T. (2002). LSU expands distance learning program through online learning solution. T.H.E. Journal (Technological Horizons in Education), 29(6), 47.
Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. The American Journal of Distance Education, 14(1), 27-41.
Gilman, E. W., Lowe, J., McHenry, R., & Pease, R. (Eds.). (1998). Merriam-Webster's collegiate dictionary. Springfield, MA: Merriam.
Nash, R. (1984, Winter). Course completion rates among distance learners: Identifying possible methods to improve retention. Retrieved April 19, 2009, from Online Journal of Distance Education Web site at http://www.westga.edu/~distance/ojdla/winter84/nash84.htm
National Center for Education Statistics. (2000). Distance education statistics 1999-2000. Retrieved March 13, 2008, from http://nces.ed.gov/das/library/tables_listing
National Center for Education Statistics. (2001). Percentage of undergraduates who took any distance education courses in 1999-2000
INTRODUCTION
Historically, postsecondary education in the United States was founded on the principles of the European system, requiring the physical presence of professors and students in the same location (Knowles, 1994). From 1636, with the founding of Harvard University (The Harvard Guide, 2004), to the development of junior colleges and vocational schools in the early 1900s (Cohen & Brawer, 1996; Jacobs & Grubb, 2003), the higher education system developed to prepare post-high school students for one of three separate tiers. The college and university system in the United States developed its own set of structures designed to prepare students for baccalaureate and graduate degrees. Junior colleges were limited to associate degrees, while vocational education institutions offered occupational certificates. In many cases, there was inadequate recognition of the postsecondary education offered at junior colleges and vocational education institutions, resulting in the inability of students to transfer to 4-year institutions (National Center for Education Statistics, 2006). In the mid-20th century, some junior colleges began to provide academic, vocational, and personal development educational offerings for members of the local communities. During this same period, junior or community colleges developed a role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs involved Associate of Arts (AA) and Associate of Science (AS) degrees. Associate of Applied Science (AAS) degrees were developed during the 1990s. The AAS degree was granted to those who successfully completed the majority of their college program in vocational education. The creation of a variety of applied baccalaureate degrees allowed students who had previously thought of the AAS degree as a terminal program to complete a baccalaureate degree (Kansas Board of Regents, 2002-2003). Online education also became a strategy for students to access higher education in the 1990s (Allen & Seaman, 2007b). The proliferation of online courses alleviated some of the location-bound barriers to higher education, but online education was criticized as less rigorous than traditional classroom-based course work by traditional academicians. Russell attempted to address this argument with his 1999 meta-analysis of studies dating from the 1920s and covering multiple delivery models, including online education. Russell concluded there was no statistically significant difference in student achievement between courses offered online and those offered in the traditional classroom setting. Since the development of correspondence courses in the 1920s, researchers have attempted to ascertain if students participating in distance education are being shortchanged in their educational goals.
No significant difference in grades has been found in the majority of studies designed to address this issue. Studies analyzing online student retention have shown significantly lower retention for online students. In the last 10 years, research studies have expanded to include variations of online education. These include strictly online, hybrid courses, Web-assisted classroom settings, and the traditional higher education course offered only as face-to-face instruction (Carmel & Gold, 2007). Online education continues to proliferate at the same time the number of secondary students in the United States overall is projected to increase (National Center 3 for Education Statistics [NCES], 2006). The projected increase of potential postsecondary students and online postsecondary options provides opportunities for increases in online education programs and courses. In 2000, NCES reported that over 65% of students in higher education were participating in online courses. In a 2007 study, Allen and Seaman estimated only 16% of those enrolled in online education courses are undergraduate students seeking their first degree, counter to the projected increase in traditional-age students. The majority of enrollees in online education are adults updating or advancing their credentials, creating an additional educational market for colleges and universities seeking to expand enrollment without adding physical space (Allen & Seaman, 2007a). For states and localities faced with a contradictory traditional-age enrollment decrease, these figures present an untapped market for higher education courses and programs. Background Researchers attempted to analyze the efficacy of distance education as far back as the 1920s when correspondence courses were created to meet the need of students not willing to attend a traditional classroom-based higher education setting. A meta-analysis of these studies resulted in “The No Significant Difference Phenomenon,” reported by Russell (2001). The results of over 355 studies were compiled, comparing various modes of delivery including correspondence, audio, television courses, and the newest wave of computer-facilitated instruction. Following analyses of studies completed prior to 2001, Russell concluded there was no difference in learning between students enrolled in distance education and those completing courses in the traditional setting. Studies completed since then have provided mixed results. Summers, Waigand, and Whittaker (2005) found there was no difference in GPA and retention between the 4 online and traditional classroom. Arle (2002) found higher achievement by online students, and Brown and Liedholm (2002) found GPA and student retention better in a traditional classroom setting. Student retention is an integral part of the student achievement conversation and is an issue for all forms of higher education. Degree-seeking students’ overall retention has been reported as less than 56% by NCES (2001). Long considered a problem in higher education, attention to the distance education model has shown even lower retention rates in online students than in students attending at the traditional college setting (Phipps & Meristosis, 1999). Research on different modalities, such as fully online and hybrid online courses, has produced mixed results (Carmel & Gold, 2007). No significant trend toward increased retention of students in any of the online modalities has been documented. 
Retention studies of transfer students have primarily included traditionally defined students transfering from a community college. Statistics have consistantly shown a lower retention rate for students transfering from a community college to a 4-year university than for students who began their post-high school education at a 4-year institution (NCES, 2006). Townsend’s studies of transfer students at the University of Missouri-Columbia also showed a lower baccalaureate retention rate for students who had completed an AAS degree than for students beginning their education at a 4-year institution (Townsend, 2002). Occupationally oriented bachelor’s degree completion programs are relatively new to higher education. Transfer programs in the liberal arts from community colleges to 4-year institutions were common by the 1990s. Townsend (2001), in her study 5 conducted at the University of Missouri–Columbia, observed the blurring of the lines between non-transferrable occupationally oriented undergraduate degrees and undergraduate degrees and certificates that were easily transferred. The study conducted by Townsend was among the first to recognize that many students who began their education at community and technical colleges had bachelor’s degree aspirations that grew after their completion of an occupationally-oriented degree. Laanan proposed that the increase in institutions offering AAS degrees necessitated new ways to transfer undergraduate credits (2003). The setting of this study is a medium-sized Midwestern campus located in Topeka, Kansas. Washburn University enrolls approximately 6000 students a year in undergraduate and graduate programs, including liberal arts, professional schools, and a law school (Washburn University, 2008). The Technology Administration (TA) program selected for the present study began in the 1990s as a baccalaureate degree completion program for students who had received an occupationally oriented associate degree at a Kansas community college or through Washburn’s articulation agreement with Kansas vocational-technical schools. This program provided students who previously had obtained an Associate of Applied Science degree in an occupational area an opportunity to earn a bachelor’s degree. Peterson, Dean of Continuing Education, Washburn University, stated that in early 1999, Washburn University began online courses and programs at the behest of a neighboring community college (personal communication, April 18, 2008). Washburn was asked to develop an online bachelor’s degree completion program for students graduating from community colleges and technical colleges with an Associate of Applied 6 Science degree. The TA program was among the first programs to offer the online bachelor’s degree completion option. The TA program offered its first online courses in Spring 2000. Online education at Washburn expanded to other programs and courses, to include over 200 courses (Washburn University, 2008). The original online partnership with two community colleges expanded to include 16 additional community colleges and four technical colleges in Kansas, as well as colleges in Missouri, California, Wisconsin, South Carolina, and Nebraska (Washburn University, 2008). An initial study in 2002 of student’s course grades and retention in online courses offered at Washburn showed no significant difference between students enrolled in online courses and students enrolled in traditional face-to-face course work (Peterson, personal communication, April 18, 2008). 
No studies of program retention have been completed. In 2008, Atkins reported overall enrollment at Washburn University decreased 6.7% from Fall 2004 to Fall 2008, from 7400 to 6901 students. During the same period, online course enrollment patterns increased 65%, from 3550 students to 5874 in 2007- 2008 (Washburn University, 2008). Atkins also reported that between 1998 and 2008, the ratio of traditional post-high school age students to nontraditional students enrolling at Washburn University reversed from 40:60 to 60:40. The shift in enrollment patterns produced an increase in enrollment in the early part of the 21st century; however, Washburn University anticipated a decrease in high school graduates in Kansas through 2016, based on demographic patterns of the state. The state figures are opposite the anticipated increase of traditional-age students nationally (NCES, 2008). The increase in 7 distance education students in relation to the anticipated decline in traditional-age students provided the focus for the study. Purpose of the Study Online education has become an important strategy for the higher education institution that was the setting of this study. First, the purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. The second part of the study was a replication of studies comparing modes of online course delivery to traditional classroom-based instruction (Carmel & Gold, 2007; Russell, 1999). A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study’s purpose was to expand the knowledge base concerning online education to include its efficacy in providing baccalaureate degree completion opportunities. Research Questions Roberts (2004) stated research questions guide the study and usually provide the structure for presenting the results of the research. The research questions guiding this study were: 8 1. Is there is a statistically significant difference between students’ grades in online classes and traditional face-to-face classes? 2. Is there a statistically significant difference between course retention rates in online classes and traditional face-to-face classes? 3. Is there a statistically significant difference between program retention for students entering the program enrolled in online classes and students entering the program enrolled in traditional face-to-face classes? Overview of the Methodology A quantitative study was utilized to compare grades by course, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized from comparative online and traditional face-to-face classes in two separate courses. In order to answer Research Question 1, a sample of 885 students enrolled in online and traditional face-to-face courses was identified. 
The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006 in both the online and traditional face-to-face classes. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed. A two-factor analysis of variance was used to analyze for the potential difference in the dependent variables, course grades due to delivery method (online and face-to-face), instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze course and program retention (Research Questions 2 and 3). 9 Delimitations Roberts (2004) defined delimitations as the boundaries of the study that are controlled principally by the researcher. The delimitations for this study were 1. Only data from 2002 through 2008 from Technology Administration online and face-to-face courses were utilized. 2. The study was confined to students enrolled at Washburn University in the Technology Administration program. 3. Only grades and retention were analyzed. Assumptions Assumptions are defined as those things presupposed in a study (Roberts, 2004). The study was based on the following assumptions: 1. Delivery of content was consistent between online and face-to-face courses and instructors, 2. Course objectives were the same for paired online and traditional face-toface courses, 3. All students enrolled in the TA program met the same criteria for admission to the University, 4. All data entered in the Excel spreadsheets were correct, 5. All students enrolled in the TA program met the same criteria for grade point average and program prerequisites. 10 Definitions The following terms are defined for the purpose of this study: Distance education. Education or training courses delivered to remote locations via postal delivery, or broadcast by audio, video, or computer technologies (Allen, 2007). Dropout. A dropout is defined as a student who has left school and discontinued studies (Merriam-Webster's Collegiate Dictionary, 1998). Face-to-face delivery. This is a course that uses no online technology; content is delivered in person, either in written or oral form (Allen, 2007). Hybrid course. This course is a blend of the online and face-to-face course. A substantial proportion of the content is delivered online, typically using some online discussions and some face-to-face meetings (Allen, 2007). Online course. This defines a course where most or all of the content is delivered online via computer technologies. Typically, there are no face-to-face meetings (Allen, 2007). 2+2 PLAN. The Partnership for Learning and Networking is a collaborative set of online 2+2 baccalaureate degree programs developed by Washburn University. The programs require completion of an associate degree from one of the partner community or technical colleges (Washburn University, 2008). Retention. This term refers to the completion of a course by receiving a letter grade in a course, or a certificate of completion or degree for program completion (Washburn University, 2008). Web-assisted. A course that uses Web-based technology to facilitate what is essentially a face-to-face course (Allen, 2007). 11 Organization of the Study This study consists of five chapters. Chapter One introduced the role of distance education in higher education. 
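As a rough illustration of the two-factor analysis of variance outlined in the Overview of the Methodology above, the sketch below models course grade as a function of delivery method, instructor, and their interaction. The grade values, factor labels, and the choice of pandas/statsmodels are assumptions made purely for illustration and do not reflect the study's actual data or software.

    # Illustrative sketch (hypothetical data): two-factor ANOVA of course grades
    # by delivery method (online vs. face-to-face) and instructor (A vs. B),
    # including the delivery-by-instructor interaction.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    grades = pd.DataFrame({
        "grade":      [3.4, 2.8, 3.9, 3.1, 2.5, 3.6, 3.0, 3.7, 2.9, 3.3, 3.8, 2.7],
        "delivery":   ["online"] * 6 + ["f2f"] * 6,
        "instructor": ["A", "A", "A", "B", "B", "B"] * 2,
    })

    # grade ~ delivery + instructor + delivery:instructor
    model = ols("grade ~ C(delivery) * C(instructor)", data=grades).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    print(anova_table)  # F statistic and p-value for each main effect and the interaction

A design of this kind reports a separate test for each main effect and for the interaction, which is how the influences of delivery method and instructor on course grades can be examined together rather than in isolation.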
Chapter One included the background of the study, the research questions, overview of the methodology, the delimitations of the study, and the definition of terms. Chapter Two presents a literature review, which includes the history of occupational postsecondary education, distance education, and studies relating to grades and retention of students involved in distance education. Chapter Three describes the methodology used for the research study. It includes the selection of participants, design, data collection, and statistical procedures of the study. Chapter Four presents the findings of the research study. Finally, Chapter Five provides a discussion of the results, conclusions, and implications for further research and practice. 12 CHAPTER TWO LITERATURE REVIEW This chapter presents the background for research into the efficacy of distance education in the delivery of higher education. Research studies have focused primarily on grades as a measure of the quality of distance education courses as compared to traditional face-to-face instruction. Utilizing grades has produced a dividing line among education researchers concerning the use of distance education as a delivery model. Retention in distance education has focused primarily on single courses, with little program retention data available. Data from retention studies in higher education have focused primarily on the traditional 4-year university student. Retention studies of community college students have produced quantitative results; however, these studies have been directed at community college students who identify themselves as transfer students early in their community college careers. Retention studies of students enrolled in occupationally oriented programs are limited. Statistical data of higher education shows an increased use of distance education for traditional academic courses as well as occupationally oriented courses. The increase in distance education courses and programs has provided a new dimension to studies of both grades and retention. The recognition of this increase, as well as questions concerning its impact on student learning and retention, produced the impetus for this study. The following review of the literature represents the literature related to this research study. Through examination of previous research, the direction of the present study was formulated. Specifically, the chapter is organized into four sections: (a) the 13 history of occupational transfer programs; (b) the history and research of distance education, including occupational transfer programs utilizing distance education; (c) research utilizing grades as an indicator of student learning in online education; and (d) research focusing on student retention in higher education, including student retention issues in transfer education and online transfer courses and programs. History of Occupational Transfer Programs The measure of success in higher education has been characterized as the attainment of a bachelor’s degree at a 4-year university. Occupationally oriented education was considered primarily a function of job preparation, and until the 1990s was not considered transferrable to other higher education institutions. Occupational transfer programs are a recent occurrence within the postsecondary system that provides an additional pathway to bachelor’s degree completion. Historically, the postsecondary experience in the United States developed as a three-track system. 
Colleges were established in the United States in 1636 with the founding of Harvard College (The Harvard Guide, 2004). Junior colleges were first founded in 1901 as experimental post-high school graduate programs (Joliet Junior College History, 2008). Their role was initially as a transfer institution to the university. When the Smith-Hughes Act was passed in 1917, a system of vocational education was born in the United States (Jacobs & Grubb, 2003), and was designed to provide further education to those students not viewed as capable of success in a university setting. Vocational education, currently referred to as occupational or technical education, was not originally designed to be a path to higher education. The first programs were designed to help agricultural workers complete their education and increase their skills. 14 More vocational programs were developed during the early 20th century as industrialization developed and as increasing numbers of skills were needed by workers in blue-collar occupations (Jacobs & Grubb, 2003). In the mid-20th century, some junior colleges expanded their programs beyond academic selections to provide occupational development and continuing education. Because of the geographic area from which they attracted students, junior colleges developed a role as “community” colleges. They also solidified their role as transfer institutions for students who, because of time, preparedness, economics, or distance, could not begin their postsecondary education at a 4-year institution (Cohen & Brawer, 1996). Until the mid-1990s, the majority of transfer programs to 4-year universities involved traditional academic degrees, including the Associate of Arts (AA) and Associate of Science (AS) degrees. Occupational programs and continuing education were viewed as terminal and non-transferrable. In 1984, Congress authorized the Carl Perkins Vocational and Technical Education Act (P.L. 98-524). In the legislation, Congress responded to employers’ concerns about the lack of basic skills in employees by adding academic requirements to vocational education legislation. Vocational program curriculum was expanded to include language arts, mathematics, and science principles, and the curriculum reflected the context of the program. The Secretary’s Commission on Achieving Necessary Skills (SCANS) was created in 1990 to determine the skills young people need to succeed in the world of work (U.S. Department of Labor, 2000). In the second Carl Perkins reauthorization in 1990 (P.L. 105-332), Congress responded to the report, which targeted academic and job skills, by outlining a seamless system of vocational and academic 15 education to prepare vocational students to progress into and through higher education. This emphasis led to the development of Associate of Applied Science (AAS) degrees during the 1990s. Granted to those who have successfully completed programs in the applied arts and sciences for careers, AAS degrees were seen as terminal (Kansas Board of Regents, 2002-2003). But as one goal was attained, conversation turned to creating a pathway from occupational associate degrees to bachelor’s degree completion. The desire of students to continue from technical degrees to a baccalaureate was not a new idea. In a paper presented in 1989 to the American Technical Association national conference, TrouttErvin and Morgan’s overview of 2+2 programs showed acceptance of AAS degrees at traditional universities was generally non-existent. 
Their suggestion for an academic bridge from early technical education to baccalaureate programs highlighted programs accepting AAS degrees toward baccalaureate completion were an exception rather than a rule (Troutt-Ervin & Morgan, 1989). It was not until the late 1990s that applied baccalaureate degrees recognized credits from technical degree students who had previously thought of themselves in a terminal program to complete their baccalaureate degree (Wellman, 2002). Despite the advance of recognition of AAS degrees, standard definitions of transfer students continued to exclude students who completed technical programs. The U.S. Department of Education did not include students receiving an Associate of Applied Science degree in the definition of students preparing for transfer to 4-year colleges (Bradburn, Hurst, & Peng, 2001; Carnevale, 2006). Most states had comparable policies in place concerning core academic curriculum, articulation agreements, transfer of credit, 16 and statewide transfer guides. There was no general recognition of occupational credit transfer. Only a few states, including Kansas, Missouri, and Washington, allowed credits earned in occupationally oriented degrees to transfer to 4-year institutions (Townsend, 2001). No state had set clear goals for the transference of occupational credits between institutions or for the state as a whole (Wellman, 2002). Despite the lack of recognition of occupational transfer credit at the federal level, a new definition of transfer education had emerged. Initially defined as the general education component of the first 2 years of a baccalaureate, the definition of transfer education now included any courses that transferred to a 4-year college, regardless of the nature of the courses (Townsend, 2001). The line between vocational schools, community colleges, and 4-year institutions blurred in the United States as employers and students increasingly made business decisions regarding education and workforce development. Employers increasingly asked for employees with academic and technical skills, as well as critical thinking skills and personal responsibility (U.S. Department of Labor, 2000). Returning students themselves were more attuned to the demands of the 21st century workforce. Their desire to return to higher education, coupled with the economy and the variety of options available to them, required a more adaptive higher education system (Carnevale, 2006). There was growing demand among new and returning students for higher education opportunities responsive to their needs. The expanding needs of the returning student provided opportunities for higher education to respond by utilizing different delivery models. 17 Distance Education Online education became a strategy for postsecondary institutions when the first correspondence courses were initiated with the mail service in the early 20th century (Russell, 1999). As various technologies emerged, distance education utilized television and video models, in addition to paper-based correspondence courses. The expansion of distance education utilizing computer technologies renewed academic debate over the efficacy of the delivery model. Online education utilizing the Internet became a significant factor in the 1990s, prompting renewed evaluation of the use of distance learning opportunities (Russell, 1999, Phipps & Meristosis, 1999). 
In 1999–2000, the number of students who took any distance education courses was 8.4% of total undergraduates enrolled in postsecondary education (NCES, 2000). In 2000, the report of the Web-Based Education Commission to the President and Congress concluded that the Internet was no longer in question as a tool to transform the way teaching and learning was offered. The Commission recommended that the nation embrace E-learning as a strategy to provide on-demand, high-quality teaching and professional development to keep the United States competitive in the global workforce. They also recommended continued funding of research into teaching and learning utilizing web-based resources (Web-Based Education Commission, 2000). The acceptance of the importance of the Internet for delivery of higher education opened new opportunities for research and continued the academic debate of the quality of instruction delivered in online education courses and programs. In a longitudinal study from 2002-2007, The Sloan Consortium, a group of higher education institutions actively involved in online education, began studies of online 18 education in the United States over a period of 5 years. In the first study, researchers Allen and Seaman (2003) conducted polls of postsecondary institutions involved with online education and found that students overwhelming responded to the availability of online education, with over 1.6 million students taking at least one online course during the Fall semester of 2002. Over one third of these students took all of their courses online. The survey also found that in 2002, 81% of all institutions of higher education offered at least one fully online or blended course (Allen & Seaman, 2003). In their intermediate report in 2005, Allen and Seaman postulated that online education had continued to make inroads in postsecondary education, with 65% of schools offering graduate courses and programs face-to-face also offering graduate courses online. Sixty-three percent of undergraduate institutions offering face-to-face courses also offered courses online. From 2003 to 2005, the survey results showed that online education, as a long-term strategy for institutions, had increased from 49% to 56%. In addition, core education online course offerings had increased (Allen & Seaman, 2005). In Allen and Seaman’s final report (2007b) for the Sloan Consortium, the researchers reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. Allen and Seaman also reported a 9.7% increase in online enrollment, compared to the 1.5% growth in overall higher education. They found by 2007, 2-year institutions had the highest growth rates and accounted for over the half the online enrollments in the previous 5 years. The researchers concluded, based on a survey 19 conducted as part of the research, institutions believed that improved student access was the top reason for offering online courses and programs (Allen & Seaman, 2007b). Community colleges began embracing distance education in the 1920s as part of their mission to provide low-cost, time-effective education. Community colleges initially provided correspondence courses by mail, but later switched to television and video courses as technology improved (Cohen & Brawer, 1996). In 2001, over 90% of public 2- year colleges in the United States provided distance education courses over the Internet (NCES, 2001). 
Vocational education, by the nature of its instructional format, was among the last of the educational institutions to participate in distance education. Because of the kinesthetic nature of instruction, vocational education leaders began investigating distance education opportunities in the 1990s, relying on the method to provide only the lecture portion of instruction. By 2004, only 31% of students enrolled in vocational schools had participated in some form of distance education during their program of study (NCES, 2005). In 2008, hands-on instruction in programs such as automobile mechanics and welding, and the clinical portion of health occupations programs, continued to be taught in the traditional classroom setting (NCES, 2008). Analysis of data reported by the NCES indicated that distance education had become a staple for higher education institutions. At both the 4-year and 2-year university level, over 65% of institutions offered more than 12 million courses in 2006-2007 by distance education. While vocational education had traditionally been more hands-on, distance education had become more prevalent in providing opportunities for students to participate in components of the system over the Internet (NCES, 2008). 20 Distance education became the prevalent strategy for higher education institutions to expand their services to new and returning students, without the financial implications of capital expansion. Higher education utilized the strategy to market to students outside their traditional geographic reach by utilizing the power of the Internet. The increasing demand from students of all ages for online opportunities provided new ground for the expansion of higher education opportunities. Grades as an Indicator of Quality of Student Learning The grading system in the United States educational system has served as an indicator of knowledge for over 100 years. Educators have utilized high school grades as a sorting mechanism in American schools to determine postsecondary opportunities. Modern society has accepted honors attainment, graduation honors, and course grades as an indicator of knowledge acquisition in postsecondary education. Stray (2001) reported that the use of grading in schools can be traced to the industrial revolution and the development of factories. William Farish of Cambridge University developed the first grading system in higher education in 1792 (Stray, 2001). Farish mimicked the system established by factories of the time: grade A being the best. The thought was that Farish employed the grading system in order to teach more students, an aberration at that time when instructors rarely had more than a few. The demand for more higher education opportunities prompted Farish to open his class to more students, and as such, led to his use of a sorting system. This was the first known record of grading utilized in classrooms to measure student achievement (Stray, 2001). 21 Smallwood (1935) reported the first grading in higher education at Yale University in 1792. Stiles, President of Yale University, directed the use of the scale in the late 18th century. However, Smallwood noted it was not until 1813 that any record of grades or marking appeared. Using a scale of 100, philosophy and mathematic professors instituted the first use of a marking instrument in the 1800s at Harvard. Smallwood noted early systems were experimental, utilizing different numerical scales, with no standardized system in place between higher education institutions. 
It was not until the late 1800s that faculty began using descriptors, such as A and B, to rank students according to a predetermined numerical scale (Smallwood, 1935). Experimentation with evaluation of achievement continued into the early 20th century, when educational psychologists, including Dewey and Thorndike, attempted to compare grading scales with intelligence testing. Thorndike’s philosophy of standardized testing and grading survived the 20th century, and his quote, “Whatever exists at all exists in some amount” (Thorndike, 1916, as cited in Ebel & Frisbie, p. 26) has been utilized in educational measurement textbooks as a validation of the use of standards of measurement to measure achievement (Ebel & Frisbie, 1991). The use of grades expanded to community colleges, high schools, and elementary schools in the early 1900s (Pressey, 1920). The use of grades throughout the educational system is fairly standardized today with the 4.0 scale. It is this standardization that allows comparison of grades as achievement between educational levels and institutions (Ebel & Frisbie, 1991) and allows grades to be utilized as a measure for comparison of educational achievement. 22 Researchers analyzing the success of community college transfer students have traditionally studied the grades of the traditional transfer student with an AA or AS degree. Keeley and House’s 1993 study of sophomore and junior transfer students at Northern Illinois University analyzed “transfer shock” (p. 2) for students matriculating from community colleges. The researchers found students who transferred from a community college obtained a grade point average significantly lower in their first semester than did students who began their college career at a 4-year institution. However, the results of the longitudinal studies showed that transfer students who persisted to graduation showed an equivalent GPA at baccalaureate completion (Keeley & House, 1993). Students who transferred from occupationally oriented degree programs typically were not included in traditional studies of transfer students. While the research in general does not include AAS students in traditional transfer data, limited conclusions were available comparing AAS students to traditional 4-year college attendees. Townsend’s study at the University of Missouri-Columbia (2002) showed no difference in grades at baccalaureate graduation between students with an AA/AS degree and students with an AAS degree. The use of grades as an indicator of the level of student achievement has been relied upon by studies comparing traditional classroom instruction and distance instruction. Research analyzing the effectiveness of student learning in distance education began with the first correspondence courses offered utilizing the mail service (Russell, 1999). The study of effectiveness of correspondence courses expanded to include new technologies, such as television and video courses, and increased with the proliferation of 23 online educational offerings. Researchers continued to challenge the effectiveness of learning methods not delivered in traditional higher education settings. In 1991, Russell reviewed over 355 studies, dating from the 1930s and continuing through the late 1980s, and found no significant difference in student learning using any form of distance education, as compared with students in classroom-based instruction (Russell, 1999). 
Russell’s conclusion formed the basis for a series of works collectively known as “No Significant Difference.” Russell’s conclusion from his studies follows: The fact is the findings of comparative studies are absolutely conclusive; one can bank on them. No matter how it is produced, how it is delivered, whether or not it is interactive, low tech or high tech, students learn equally well with each technology and learn as well as their on-campus, face-to-face counterparts even though students would rather be on campus with the instructor if that were a real choice. (p. xviii) Overwhelmingly, studies have supported Russell’s conclusions, including Neuhauser’s (2002) study of traditional face-to-face education and online education in a business communications class at a large urban university in North Carolina. Neuhauser concluded there was no significant difference in pre- and post-test scores of students enrolled in online and traditional communications classes. In addition, Neuhauser found no significant difference in final grades, homework grades, and grades on research papers, even though learners in the online course were significantly older than were learners in the traditional face-to-face section. The Summers et al. (2005) research included a comparison of student achievement and satisfaction in an online versus a traditional face-to-face statistics class. 24 The study, conducted at the University of Missouri-Columbia, included undergraduate nursing students who were tested on both their pre- and post-course knowledge of statistics. Their results indicated that utilizing grades as an indicator of knowledge showed no significant difference between the online and traditional classroom students. In their meta-analysis, Machtmes and Asher (2002) reviewed 30 studies and concluded there did not appear to be a difference in achievement, as measured by grades, between distance and traditional learners. As technology use continued to evolve in online education, various studies were conducted to determine whether different delivery methods created a difference in the grades of online students compared to their face-to-face counterparts. A study conducted by Carmel and Gold (2007) supported Russell’s original conclusion by analyzing specific types of online platforms and delivery models. Carmel and Gold’s study included hybrid and traditional classroom-based instruction. They analyzed results from 164 students in 110 courses and found no significant difference in student achievement based on grades between students enrolled in either delivery method. Additional studies supporting Russell’s theory have crossed multiple content areas and delivery models. Brown and Liedholm’s (2002) study at Michigan State University included microeconomics students in virtual, hybrid, and traditional classroom-based instruction. The study included 389 students in the traditional setting, 258 in the hybrid delivery section and 89 students enrolled in online education. No significant difference in student learning as measured by end of course grades was found. Research also showed type of course discipline is not affected by the online delivery model. Schulman and Simms (1999) compared pretest and posttest scores of 25 students enrolled in an online course and a traditional course at Nova Southeastern University. The researchers compared 40 undergraduate students enrolled in online courses and 59 undergraduate students enrolled in the classroom setting of the same course. 
Results indicated that the students who select online courses scored higher than traditional students scored on the pretest results. However, posttest results showed no significant difference for the online students versus the in-class students. Schulman and Simms concluded that online students were learning equally as well as their classroombased counterparts. Reigle’s (2007) analysis across disciplines at the University of San Francisco and the University of California found no significant difference between online and face-to-face student grade attainment. Shachar and Neumann (2003) conducted a meta-analysis that estimated and compared the differences between the academic performance of students enrolled in distance education compared to those enrolled in traditional settings over the period from 1990-2002. Eighty-six studies containing data from over 15,000 participating students were included in their analysis. The results of the meta-analysis showed that in two-thirds of the cases, students taking courses by distance education outperformed their student counterparts enrolled in traditionally instructed courses. Lynch, during the use of the “Tegrity” system, a brand-specific online platform at Louisiana State University, found that students’ grades were slightly better after utilizing the technology than when the traditional approach was used (Lynch, 2002). Initial results of a University of Wisconsin-Milwaukee study of 5000 students over 2 years indicated that the U-Pace online students performed 12% better than their traditional Psychology 101 counterparts on the same cumulative test (Perez, 2009). Arle’s (2002) study found 26 students enrolled in online human anatomy courses at Rio Salado College scored an average of 6.3% higher on assessments than the national achievement average. Students were assessed using a national standardized test generated by the Human Anatomy and Physiology Society, whose norming sample is based entirely on traditional classroom delivery (Arle, 2002). In a study conducted by Stephenson, Brown, and Griffin (2008), comparing three different delivery styles (traditional, asynchronous electronic courseware, and synchronous e-lectures), results indicated no increased effectiveness of any delivery style when all question types were taken into account. However, when results were analyzed, students receiving traditional lectures showed the lowest levels on questions designed to assess comprehension. Research found supporters in higher education academic leaders. In a 2006 survey of Midwestern postsecondary institutions concerning their online offerings, 56 % of academic leaders in the 11 states rated the learning outcomes in online education as the same or superior to those in face-to-face instructional settings. The proportion of higher education institutions believing that online learning outcomes were superior to those for face-to-face outcomes was still relatively small, but had grown by 34% since 2003, from 10.2 to 13.7 % (Allen & Seaman, 2007b). This belief added merit to the conclusions supported by Russell and others. Russell’s (1999) “no significant difference” conclusion had its detractors. The most commonly cited is Phipps and Merisotis (1999), who reviewed Russell’s original meta-analysis (1999) and reported a much different conclusion. They concluded that the overall quality of the original research was questionable, that much of the research did 27 not control for extraneous variables, and therefore it could not show cause and effect. 
They included in their findings evidence that the studies utilized by Russell (2000) in the meta-analysis did not use randomly selected subjects, did not take into effect the differences among students, and did not include tests of validity and reliability. The Phipps and Merisotis (1999) analysis included the conclusion that research has focused too much on individual courses rather than on academic programs, and has not taken into account differences among students. They postulated that based on these conclusions, there is a significant difference in the learning results, as evidenced by grades, of students participating in distance education as compared to their classroombased peers. Their analysis of Russell’s original work questioned both the quality and effectiveness of research comparing distance and traditional education delivery. While there has been ongoing conjecture that online education students are not receiving an equivalent learning experience compared to their traditional classroom counterparts, studies utilizing grades as an indicator of student learning have produced little evidence of the disparity. The incidence of studies showing significant negative differences in grades of online learners is small. Higher education institutions have indicated their support for online education, and its continued growth has allowed studies such as the present research to contribute to ongoing dialogue. Student Retention in Postsecondary Education Persistence and retention in higher education is an issue that has intrigued researchers for over 50 years. Quantitative studies conducted in the mid-20th century produced data that caused researchers to look at low retention rates in higher education 28 and search for answers. This question has continued to consume researchers and higher education institutions. In 1987, Tinto attempted to summarize studies of individual student retention in higher education by proposing a theory to allow higher education administrators to predict success and support students (Tinto, 1987). Tinto’s model of student engagement has been in use for over 20 years as higher education administrators and faculty attempt to explain student retention issues at universities and colleges. Tinto’s model primarily focused on factors of student engagement: How students respond to instructors, the higher education community itself, and students’ own engagement in learning are the primary factors Tinto theorized as determining the student’s retention. In the concluding remarks to his 1987 treatise on retention, Tinto acknowledged that persistence in higher education is but one facet of human growth and development, and one that cannot necessarily be attributed to a single factor or strategy. Tinto’s (1987) original study of student retention included the observation that student retention is a complicated web of events that shape student leaving and persistence. He observed that the view of student retention had changed since the 1950s, when students were thought to leave due to lack of motivation, persistence, and skills, hence the name dropout. In the 1970s, research began to focus on the role of the environment in student decisions to stay or leave. In the 1990s, Tinto proposed that the actions of the faculty were the key to institutional efforts to enhance student retention (Tinto, 2007). 
This was a significant addition to his theory, placing the cause on the instructor instead of the student, and it has done much to influence retention strategies 29 utilized in higher education institutions (Tinto, 2007). Tinto’s studies have driven research in both traditional retention studies and those involving distance education. Studies of the persistence of the postsecondary student routinely focus on 4-year postsecondary education. It is only within the last 20 years that persistence studies have included community college students and occupational students, acknowledging that their reasons for entering the postsecondary community are different from the traditional 4- year higher education participant (Cohen & Brawer, 1996). With different avenues to a baccalaureate degree more prevalent, the research into college persistence has expanded to include other types of programs and students. Postsecondary student retention rates routinely utilize data from longitudinal studies of students entering in a Fall semester and completing a bachelor’s program no more than 6 years later (NCES, 2003). The National Center for Education Statistics reported that 55% of those seeking a baccalaureate degree would complete in 6 years (NCES, 2003). The report acknowledged institutions are unable to follow students who transfer to other institutions; they are able to report only the absence of enrollment in their own institution. Research has also found a large gap between community college entrants and 4- year college entrants in rates of attaining a bachelor’s degree. Dougherty (1992) reported that students entering community college receive 11 to 19% fewer bachelor’s degrees than students beginning at a 4-year university. Dougherty postulated that the lower baccalaureate attainment rate of community college entrants was attributable to both their individual traits and the institution they entered (Dougherty, 1992). 30 Studies of student retention of community college also vary based on the types of students. Community college retention rates are routinely reported as lower than traditional 4-year institutions (NCES, 2007). Cohen and Brawer (1996) attributed the differences in retention to the difference in the mission. In many instances, students did not enroll in a community college in order to attain a degree (Cohen & Brawer, 1996). The most recent longitudinal study in 1993 showed a retention rate of 55.4% of students after 3 years (NCES, 2001). Of community college students, only 60.9% indicated a desire to transfer later to a baccalaureate degree completion program (NCES, 2003). While retention data collected by the federal government (NCES, 2003) did not include students with an AAS degree, Townsend’s studies of the transfer rates and baccalaureate attainment rates of students in Missouri who had completed an Associate of Arts and students who had completed an Associate of Applied Science degree was 61% compared to 54% (Townsend, 2001). Vocational or occupational programs have reported retention rates as “program completion,” a definition involving completion of specific tasks and competencies instead of grades and tied to a limited program length. This state and federal requirement indicates program quality and ensures continued federal funding. In 2001, the U.S. Department of Education reported a 60.1% completion rate of postsecondary students enrolled in occupational education (NCES, 2007). 
Until 1995, the reasons for students leaving were neither delineated nor reported; it was not until federal reporting requirements under the Carl Perkins Act of 1994 that institutions were required to explore why students were not retained in vocational programs (P.L. 105-332). Distance education provided a new arena for the study of student persistence. Theorists and researchers have attempted to utilize Tinto’s model of student persistence to explain retention issues involved with distance education. However, Rovai (2003) analyzed the differing student characteristics of distance learners as compared to the traditional students targeted by Tinto’s original models and concluded that student retention theories proposed from that population were no longer applicable to distance education learners. Rovai proposed that distance educators could address retention in ways that traditional higher education has not. He suggested that distance educators utilize strategies such as capitalizing on students’ expectations of technology, addressing economic benefits, and addressing specific educational needs to increase student retention in courses (Rovai, 2003). The expanded use of technology created a distinct subset of research into student retention issues. In 2004, Berge and Huang developed an overview of models of student retention, with special emphasis on models developed to explain the retention rates in distance education. Their studies primarily focused on the variables in student demographics and external factors, such as age and gender, which influence persistence and retention in online learning. Berge and Huang found that traditional models of student retention such as Tinto’s did not acknowledge the differences in student expectations and goals that are ingrained in the student’s selection of the online learning option. Other researchers have attempted to study retention issues specifically for online education. In a meta-analysis, Nora and Snyder (2009) found the majority of studies of online education focused on students’ individual characteristics and individual perceptions of technology. Nora and Snyder concluded that researchers attempt to utilize traditional models of student engagement to explain student retention issues in distance or online learning courses, with little or no success. This supported Berge and Huang’s conclusions. Nora and Snyder (2009) also noted a dearth of quantitative research. Few quantitative studies exist that support higher or equal retention in online students compared to their classroom-based counterparts. One example is the Carmel and Gold (2007) study. They found no significant difference in student retention rates between students in distance education courses and their traditional classroom-based counterparts. The study utilized data from 164 students, 95 enrolled in classroom-based courses and 69 enrolled in a hybrid online format. Participants self-selected rather than being randomly assigned and were not all enrolled in the same course, introducing variables not accounted for in the study. The majority of quantitative studies instead concluded there is a higher retention rate in traditional classrooms than in distance education. In the Phipps and Merisotis (1999) review of Russell’s original research, which included online education, results indicated that research has shown even lower retention rates in online students than in students attending classes in the traditional college setting.
The high dropout rate among distance education students was not addressed in Russell’s meta-analysis, and Phipps and Merisotis found no suitable explanation in the research. They postulated that the decreased retention rate documented within distance education studies skews achievement data by excluding the dropouts. Diaz (2002) found a high drop rate for online students compared to traditional classroom-based students in an online health education course at Nova Southeastern. Other studies have supported the theory that retention of online students is far below that of the traditional campus students. In 2002, Carr, reporting for The Chronicle of Higher Education, noted that online courses routinely lose 50% of students who originally enrolled, as compared to a retention rate of 70-75% of traditional face-to-face students. Carr reported dropout rates of up to 75% in online courses as a likely indicator of the difficulty faced in retaining distance education students who do not routinely meet with faculty. The data have not been refuted. As community colleges began utilizing distance education, retention rates were reported as higher than those of traditional students (Nash, 1984). However, the California Community College System report for Fall 2008 courses showed inconsistent retention results for distance education learners, varying by the type of course. Results indicated equivalent retention rates for online instruction compared to traditional coursework in the majority of courses. Lower retention rates were indicated in online engineering, social sciences, and mathematics courses as compared to traditional classroom instructional models (California Community Colleges Chancellor's Office, 2009). Due to the limited number of vocational/technical or occupational courses taught in the online mode, there was little data on student retention. In 1997, Hogan studied technical course and program completion of students in distance and traditional vocational education and found that course completion rates were higher for distance education students. However, program completion rates were higher for traditional students than for students enrolled in distance education (Hogan, 1997). In summary, studies of retention have focused primarily on student characteristics while acknowledging that postsecondary retention rates vary according to a variety of factors. Research showed mixed results concerning the retention rate of online students, though quantitative data leans heavily toward a lower course retention rate in online students. Data from 4-year universities have shown lower retention rates for online students than for traditional face-to-face students, while community colleges have shown inconsistent results. Data from vocational-technical education has been limited, but course retention rates are higher for online students, while program retention rates are lower. No significant research factor affecting retention has been isolated between students in online baccalaureate completion programs and students participating in traditional classroom-based settings.

Summary

Research studies have been conducted analyzing student retention in higher education, transfer and retention of students from community colleges to universities, the impact of distance education, and student achievement and retention factors related to distance education.
However, no comparative research was identified that compared the achievement and retention of students participating in an occupationally oriented transfer program utilizing both online education and traditional classroom-based instruction. Chapter Three addresses the topics of research design, hypotheses, and research questions. Additionally, population and sample, data collection, and data analysis are discussed.

CHAPTER THREE

METHODOLOGY

The purpose of this study was to determine if there is a significant difference between course grades of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The study also examined if there is a significant difference between course retention and program retention of students enrolled in online Technology Administration courses and their traditional classroom-based counterparts. The methodology employed to test the research hypotheses is presented in this chapter. The chapter is organized into the following sections: research design, hypotheses and research questions, population and sample, data collection, data analysis, and summary.

Research Design

A quantitative, quasi-experimental research design was selected to study grades, course retention, and program retention in students enrolled in the Technology Administration program. The design was chosen as a means to determine if significant differences occur between online and face-to-face students by examining numerical scores from all participants enrolled, and retention rates in both courses and programs in the Technology Administration program.

Hypotheses and Research Questions

This study focused on three research questions with accompanying hypotheses. The research questions and hypotheses guiding the study follow.

Research Question 1: Is there a statistically significant difference between students’ grades in online classes and traditional face-to-face classes?

H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance.

Research Question 2: Is there a statistically significant difference between the course retention rates of students in online classes and traditional face-to-face classes?

H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance.

Research Question 3: Is there a statistically significant difference in program retention between students who entered the program in online classes and students who entered the program in traditional face-to-face classes?

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance.

Population and Sample

The two populations selected were students enrolled in online and face-to-face courses. The sample included students enrolled in Technology Administration courses. Student enrollment was analyzed for all Technology Administration courses in the program sequence to determine the number of samples available in online and face-to-face classes. The course enrollment data for the sample are outlined in Table E1. The subsample of the data utilized for the study is presented in Table 1.
Table 1

Technology Administration Enrollment Data

                            TA 300           TA 310
Year        Instructor      FTF     OL       FTF     OL
Spring 02   A                                 14     25
Fall 02     A                11     20         9     26
Spring 03   A                                 29     38
Fall 03     A                20     29        13     34
Spring 04   B                                 32     25
Fall 04     B                18     32        10     28
Spring 05   B                                 23     31
Fall 05     B                15     28        11     28
Spring 06   B                                 13     30
Fall 06     B                14     24        24     32
Spring 07   B                                 15     33
Fall 07     B                16     23        27     30
Spring 08   B                                 22     35
TOTAL                        94    156       242    395

Note: TA 300 Evolution and Development of Technology, TA 310 Technology and Society; FTF = face-to-face, OL = online.

The subsample for hypothesis 1 and hypothesis 2 included all students enrolled in two entry-level courses required for completion of the Technology Administration program: TA 300 Evolution and Development of Technology, and TA 310 Technology and Society. The university offered the courses in online and face-to-face formats during the period of the study. Two instructors, identified as A and B, were involved with teaching the online and face-to-face courses. Two courses were selected that met the following criteria: (a) the same faculty member taught both courses, (b) the courses were offered over the period of the study consistently in online and face-to-face instruction, and (c) the syllabi for simultaneous online and face-to-face sections were identical. For hypothesis 3, data included records of all students enrolled in TA 300 Evolution and Development of Technology for the Fall semesters of 2002, 2003, 2004, 2005, and 2006. The course was selected for inclusion in the study based on the following criteria: (a) student enrollment in the course was the result of declaration of the Technology Administration program major and (b) parameters of the study allowed students 2 or more years to complete the program requirements. For the purpose of the study, all student names were removed.

Data Collection

An Institutional Review Board (IRB) form was prepared for Washburn University approval prior to data collection. The study was designated as an exempt study. The Washburn University IRB form is provided in Appendix A. Approval of the IRB was transmitted by e-mail. A copy is located in Appendix B. In addition, an IRB was submitted to Baker University. The form is located in Appendix C. The Baker IRB approval letter is located in Appendix D. Washburn University had two types of data collection systems in place during the period identified for the study, Spring 2002 through Spring 2008. The AS 400 data collection system generated paper reports for 2002 and 2003. The researcher was allowed access to paper records for 2002 and 2003. Enrollment results for all Technology Administration sections for 2002-2003 were entered manually into an Excel spreadsheet. In 2004, the University transferred to the Banner electronic student data management system. All records since 2004 were archived electronically and were retrieved utilizing the following filters for data specific to students enrolled in the identified Technology Administration courses: TA course designation and specific coding for year and semester to be analyzed (01 = Spring semester, 03 = Fall semester, 200X for specified year). Results retrieved under the Banner system were saved as an Excel spreadsheet by the researcher. The course enrollment data for the sample are presented in Tables E1 and E2. Student transcripts and records were analyzed to determine program completion or continued enrollment in the program for program retention analysis.
Documents examined included paper student advising files located within the Technology Administration department and specific student records housed within the Banner reporting system. Technology Administration course TA 300 was selected based on the following: (a) It is a required entry course only for Technology Administration majors, and (b) TA 310 is a dual enrollment course for business department majors.

Data Analysis

Data analysis for all hypothesis testing was conducted utilizing SPSS software version 16.0. The software system provided automated analysis of the statistical measures. To address Research Question 1, a two-factor analysis of variance was used to analyze for a potential difference in delivery method (online and face-to-face), a potential difference in instructor (instructors A and B), and a potential interaction between the two factors. When the analysis of variance reveals a difference between the levels of any factor, Salkind (2008) referred to this as the main effect. This analysis produces three F statistics, which were used to determine whether a difference in grades of online students as compared to their classroom-based counterparts was attributable to a main effect for delivery, a main effect for instructor, or an interaction between instructor and delivery. Chi-square testing was selected to address Research Questions 2 and 3. The rationale for selecting chi-square testing was to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Salkind, 2008). If the obtained chi-square value is greater than the critical value, it indicates there is sufficient evidence to believe the research hypothesis is true. For Research Question 2, a chi-square test for differences between proportions analyzed course retention of online and face-to-face students at the end of the semester. For Research Question 3, a chi-square test for differences between proportions analyzed program retention comparing students who began the program in the online section of TA 300 to the students who began in the face-to-face section.

Limitations of the Study

Roberts (2004) defined the limitations of the study as those features of the study that may affect the results of the study or the ability to generalize the results. The limitations of this study included (a) potential for data entry error, (b) curriculum modifications not reflected in the syllabi made by instructors over the period of the study, (c) behavior of the instructors during delivery in the two different formats, and (d) rationale of students for selecting one course delivery method over another. These may affect the generalizability of this study to other populations.

Summary

This chapter described the research design, population and sample, hypotheses, data collection, and analysis used in this research study. Statistical analyses using two-way analysis of variance and chi-square tests were used to determine if there are significant statistical differences in the course grades, course retention, and program retention of students enrolled in online classes as compared to their face-to-face counterparts. The results of this study are presented in Chapter Four.

CHAPTER FOUR

RESULTS

The study had three main purposes. The first purpose was to determine if there was a difference in grades between students in online classes and students in traditional face-to-face classes in the Technology Administration program.
In addition, the study was designed to examine the difference in course retention rates of students in the online classes as compared to the face-to-face classes. The third part of the study was designed to examine program retention rates of students who began the program in online classes and students who began the program in traditional face-to-face classes. This chapter begins with the descriptive statistics for the sample: gender, age, grades by gender, and course selection of students in online or face-to-face courses by gender. From the three research questions, research hypotheses were developed, and the results of statistical analyses used to test each hypothesis are presented.

Descriptive Statistics

Demographic data for the sample were collected from the student data system for 2002 through 2009. The descriptive statistics presented below include gender (n = 884), age (n = 880), grades by gender (n = 884), and course selection online or face-to-face by gender (n = 884). Table 2 describes the cross-tabulation of the frequencies for gender and age of the sample selected for the study. The mean age for the sample tested was 31.06 years, with a standard deviation of 9.46 years. The age range of the sample was from 18 to 66 years. One participant did not report gender. Age was not available for four participants.

Table 2

Participant Age Group by Gender (n = 880)

           Age Range by Years
           < 20    20-29    30-39    40-49    50-59    60-69
Female        0      198      121       62       29        3
Male          5      281      104       53       19        5

Note: Gender not reported for one participant; age not reported for four participants. Females = 413, Males = 467.

Table 3 presents the frequency of course grades by gender and the total number of students receiving each grade. Grades were distributed across the continuum, with slightly more females than males receiving A’s, more males than females receiving B’s, C’s, and F’s, and a nearly equal distribution of students receiving D’s. More males withdrew from classes than did females.

Table 3

Average Grades by Gender (n = 884)

Grade                 Female    Male    Total
A                        245     208      453
B                         53      79      132
C                         32      70      102
D                         17      16       33
F                         37      55       92
No Credit                  1       0        1
Passing                    0       1        1
Withdraw                  25      42       67
Withdraw Failing           3       0        3
Total                    413     471      884

Note: Gender not reported for one participant.

Table 4 presents the course selection patterns of male and female students. Overall, more students selected online courses than face-to-face courses. Females and males enrolled in online courses in nearly equal numbers; however, proportionally more females (68.7%) chose the online instructional format instead of face-to-face compared with males (60.9%).

Table 4

Course Selection by Gender (n = 884)

Course Type       Female    Male    Total
Face-to-face         129     184      313
Online               284     287      571
Total                413     471      884

Note: Gender not reported for one participant.

Hypothesis Testing

H1: There is a statistically significant difference in the course grades of students enrolled in online classes and students enrolled in a traditional classroom setting at the 0.05 level of significance. The sample consisted of 815 students enrolled in online and face-to-face Technology Administration courses at Washburn University. A two-factor analysis of variance was used to analyze for the potential difference in course grades due to delivery method (online and face-to-face), the potential difference due to instructor (instructors A and B), and the potential interaction between the two independent variables. Mean and standard deviation for grades were calculated by delivery type and instructor. Table 5 presents the descriptive statistics.
The mean of grades by delivery showed no significant difference between online and face-to-face instruction. Additionally, no significant difference in mean grade was evident when analyzed by instructor.

Table 5

Means and Standard Deviations by Course Type and Instructor

Course type      Instructor    Mean      Standard Deviation       n
Face-to-face     A             3.0690    1.41247                 29
                 B             2.9586    1.39073                266
                 Total         2.9695    1.39084                295
Online           A             2.9024    1.52979                 41
                 B             3.0271    1.35579                479
                 Total         3.0271    1.36911                520
Total            A             2.9714    1.47414                 70
                 B             3.0027    1.36783                745
                 Total         3.000     1.37635                815

The results of the two-factor ANOVA, presented in Table 6, indicated there was no statistically significant difference in grades due to delivery method (F = 0.078, p = 0.780, df = 1, 811). This test was specific to hypothesis 1. In addition, there was no statistically significant difference in grades due to instructor (F = 0.002, p = .967, df = 1, 811), and no significant interaction between the two factors (F = 0.449, p = 0.503, df = 1, 811). The research hypothesis was not supported.

Table 6

Two-Factor Analysis of Variance (ANOVA) of Delivery by Instructor

Source                     df      F        p
Delivery                    1      0.148    0.780
Instructor                  1      0.003    0.967
Delivery*Instructor         1      0.449    0.503
Error                     811
Total                     815

H2: There is a statistically significant difference in student course retention between students enrolled in online courses and students enrolled in face-to-face courses at the 0.05 level of significance. The sample consisted of 885 students enrolled in TA 300 and TA 310 online and face-to-face courses. The hypothesis testing began with the analysis of the contingency data presented in Table 7. The data are organized with course selection (online or face-to-face) as the row variable and retention in the course as the column variable. Data were included in the retained column if a final grade was reported for the participant. Participants who were coded as withdraw or withdraw failing were labeled as not retained. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Roberts, 2004). The result of the chi-square testing (χ² = 2.524, p = .112, df = 1, 884) indicated there was no statistically significant difference between retention of students enrolled in online courses compared to students enrolled in face-to-face courses in the TA program. Additional results indicated that 93.92% (294/313) of the face-to-face students were retained, compared to 90.89% (519/571) of the online students. The research hypothesis was not supported.

Table 7

Course retention of online and face-to-face TA students

                          Retained    Not retained    Total
Face-to-face students          294              19      313
Online students                519              52      571
Total                          813              71      884

H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance. The sample consisted of 249 students enrolled in TA 300 in the online and face-to-face courses from Fall 2002 through Fall 2008. The hypothesis testing began with the analysis of the contingency data located in Table 8. The table is organized with course selection (online or face-to-face) as the row variable and program retention as the column variable. Data were included in the retention column if students had successfully met requirements for a Bachelor of Applied Science in Technology Administration or if they were enrolled in the program in Spring 2009.
Data were included in the non-retained column if students had not fulfilled degree requirements and they were not enrolled in Spring 2009. Chi-square analysis was selected to observe whether a specific distribution of frequencies is the same as if it were to occur by chance (Roberts, 2004). The result of the chi-square testing (χ² = .132, p = .717, df = 1, 249) indicated there was no statistically significant difference between the program retention rate of students who began the TA program in the online courses compared to the students who began the program in the face-to-face courses. Additional results showed that 91.57% (163/178) of students who began in online courses were retained compared to 92.96% (66/71) of students who began the TA program in face-to-face courses. The research hypothesis was not supported.

Table 8

Program retention of online and face-to-face TA students

                  Retained    Not retained    Total
Face-to-face            66               5       71
Online                 163              15      178
Total                  229              20      249

Summary

In this chapter, an introduction provided a summary of the analysis and statistical testing in the order in which it was presented. This was followed by descriptive statistics of the sample, including age range of participants, grades by gender, and course selection by gender. Results from testing of H1 revealed no significant difference between course grades of online students and students enrolled in traditional face-to-face classes. Chi-square testing was utilized for testing of H2. Results indicated there was no significant difference in course retention of students enrolled in online courses and students enrolled in traditional face-to-face courses. H3 was also tested utilizing chi-square testing. The results indicated no significant difference in program retention of students who began the TA program in online courses and students who began in traditional face-to-face courses. Chapter Five provides a summary of the study, discussion of the findings in relationship to the literature, implications for practice, recommendations for further research, and conclusions.

CHAPTER FIVE

INTERPRETATION AND RECOMMENDATIONS

Introduction

In the preceding chapter, the results of the analysis were reported. Chapter Five consists of the summary of the study, an overview of the problem, purpose statement and research questions, review of the methodology, major findings, and findings related to the literature. Chapter Five also contains implications for further action and recommendations for further research. The purpose of the latter sections is to expand on the research into distance education, including implications for expansion of course and program delivery and future research. Finally, a summary is offered to capture the scope and substance of what has been offered in the research.

Study Summary

The online delivery of course content in higher education has increased dramatically in the past decade. Allen and Seaman (2007a) reported that almost 3.5 million students participated in at least one online course during the Fall 2006 term, a nearly 10% increase over the number reported in the previous year. They also reported a 9.7% increase in online enrollment compared to the 1.5% growth in overall higher education. As online delivery has grown, so has criticism of its efficacy. Online delivery of education has become an important strategy for the institution that is the setting of this study. The purpose of this study was three-fold.
The first purpose of the study was to determine if there was a significant difference between the course grades of students participating in TA online courses and their traditional classroom-based counterparts. The second purpose of the study was to determine if there was a significant difference between course retention of students participating in TA online courses and their traditional classroom-based counterparts. A third purpose of the study was to determine if there was a significant difference between program retention of students who began the TA program in online courses and those who began the program enrolled in traditional face-to-face courses. The study was designed to expand the knowledge base concerning online education and its efficacy in providing baccalaureate degree completion opportunities. The research design was a quantitative study to compare course grades, course retention, and program retention of students enrolled in the online and traditional face-to-face TA program at Washburn University. Archival data from the student system at Washburn University were utilized to compare online and traditional face-to-face students. In order to answer Research Question 1, a sample of students enrolled in TA 300 and TA 310 online and traditional face-to-face courses was analyzed. The sample included students entering the program in the Fall semesters of 2002, 2003, 2004, 2005, and 2006. Two instructors were responsible for concurrent instruction of both the online and face-to-face classes for the period analyzed. A two-factor analysis of variance was used to analyze for a potential difference in the dependent variable, course grades, due to delivery method (online and face-to-face), the instructor (instructors A and B), and the potential interaction between the two independent variables (Research Question 1). A chi-square test for differences among proportions was used to analyze both course and program retention (Research Questions 2 and 3). For Research Question 2, archived data from the Washburn University student system were analyzed for students enrolled in TA 300 and TA 310. Additional variables identified for this sample included course selection and instructor (A or B). For Research Question 3, archived data from the Washburn University system were used, which identified students with declared Technology Administration majors who began the TA program enrolled in online and face-to-face courses. A single gatekeeper course (TA 300) was identified for testing. Two instructors (A and B) were responsible for instruction during the testing period. A two-factor ANOVA was utilized to test H1: There is a statistically significant difference in course grades of students participating in online courses and students enrolled in a traditional classroom setting at the 0.05 level of significance. ANOVA testing was utilized to account for the two delivery methods and two instructors involved for the period of the study. The results of the test indicated there was no statistically significant difference in grades due to delivery method. The results of the testing also indicated no statistically significant difference in grades due to instructor and no interaction between the two independent variables. The research hypothesis was not supported. To test the next hypothesis, chi-square testing was utilized; an illustrative recomputation of the reported chi-square statistics is shown in the sketch below.
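The following sketch is an editorial illustration and was not part of the original analysis, which was conducted in SPSS 16.0. Assuming the standard Python scientific stack (NumPy and SciPy) is available, it recomputes the uncorrected Pearson chi-square statistics for the contingency data reported in Tables 7 and 8, together with the row-wise retention proportions quoted in the text; the resulting values closely match the reported results (approximately χ² = 2.52, p = .11 for course retention and χ² = 0.13, p = .72 for program retention). A comparable two-factor ANOVA (grade by delivery and instructor) could be fit with a formula such as grade ~ delivery * instructor in statsmodels, but that would require the record-level grade data, which are not reproduced in this document.

```python
# Editorial sketch: recomputes the chi-square tests reported in Chapter Four
# from the published cell counts. The original analysis used SPSS 16.0; this
# illustration assumes NumPy and SciPy are installed.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: face-to-face, online; columns: retained, not retained
course_retention = np.array([[294, 19],    # Table 7, face-to-face students
                             [519, 52]])   # Table 7, online students
program_retention = np.array([[66, 5],     # Table 8, face-to-face entrants
                              [163, 15]])  # Table 8, online entrants

for label, table in [("Course retention (H2)", course_retention),
                     ("Program retention (H3)", program_retention)]:
    # correction=False gives the uncorrected Pearson chi-square, which matches
    # the values reported in the text (2.524 and .132, respectively).
    chi2, p, df, expected = chi2_contingency(table, correction=False)
    retained_share = table[:, 0] / table.sum(axis=1)  # row-wise retention rates
    print(f"{label}: chi2 = {chi2:.3f}, p = {p:.3f}, df = {df}")
    print(f"  face-to-face retained: {retained_share[0]:.2%}, "
          f"online retained: {retained_share[1]:.2%}")
```

Because both statistics fall well below the critical value of 3.841 for chi-square with one degree of freedom at the 0.05 level, the sketch reaches the same conclusion reported above: neither difference in retention is statistically significant.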
H2: There is a statistically significant difference in student course retention between students participating in online courses and students enrolled in face-to-face courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in course retention of students enrolled in online courses and students enrolled in face-to-face courses in the TA program. The research hypothesis was not supported. To test the final hypothesis, chi-square testing was also used. H3: There is a statistically significant difference in program retention between students who begin the Technology Administration program in online courses and students who begin in face-to-face courses at the 0.05 level of significance. The result of the chi-square testing indicated there was no statistically significant difference in the program retention rate of students who began the TA program in the online courses and students who began the program in the face-to-face courses. The research hypothesis was not supported. Testing found that course retention was high in both formats, leading to the interpretation that the high rates may be due to the age of participants or prior degree completion. The results found no significant difference in grades, course, or program retention for students in online TA courses and students enrolled in traditional face-to-face instruction. The implication of these results compared to current literature is discussed in the next section.

Findings Related to the Literature

Online education has become a strategy for higher education to provide instruction to students limited by distance or time, or who, for other reasons, do not wish to attend traditional classroom-based university classes. Additionally, online education allows higher education institutions to expand their geographic base. Institutions have utilized distance education for over a century to provide instruction, but it was only within the last two decades that instruction over the Internet had replaced correspondence, television, and video courses as the method of choice for delivery (Russell, 1999). Utilizing grades as a measure of achievement, meta-analyses conducted by Russell (1999), Shachar and Neumann (2003), and Machtmes and Asher (2002) found no significant difference in grades of online students and traditional classroom-based students. These analyses utilized multiple studies of course information, comparing grades of online students and traditional face-to-face students, primarily utilizing t tests as the preferred methodology.
Those analyses relied on t tests rather than chi-square testing because the underlying studies were limited to a single course taught by one instructor during one semester or cycle. Carr (2002) reported in The Chronicle of Higher Education that retention of online students was 50% less than that of traditional face-to-face students. Carr’s results were based on the examination of longitudinal retention data from universities as reported to the United States Department of Education. The results of the present study found no significant difference in the course retention rates. These results are supported by the findings of Carmel and Gold (2007), in which they reported no significant difference in course retention rates of online students compared to traditional face-to-face students in their analysis of students in multiple courses in disciplines across a 4-year university. The present study expanded those results, examining course data in the same discipline over a 6-year period and controlling for delivery by two separate instructors. Research into program completion rates of AAS students has been conducted primarily in traditional university settings, including Townsend’s (2002) studies at the University of Missouri-Columbia. Townsend’s results showed a lower baccalaureate completion rate for students entering with an AAS than students who transferred to 4-year universities with an AA degree. Studies by Hogan (1997) of vocational-education programs also found a lower program completion rate for online students compared to students in traditional delivery vocational education programs. Analysis of the data in the current study showed no significant difference in program completion rate of students who began in online TA courses as compared to students who began the program in face-to-face courses.

Conclusions

The use of distance education for postsecondary instruction, primarily in the form of the Internet, has both changed and challenged the views of traditional university-based instruction. Multiple studies have been designed in an effort to examine whether online students have the same level of academic achievement as their traditional higher education peers. The present study agrees with the research indicating there is no statistically significant difference in the grades of online students and their face-to-face counterparts. In addition, with student retention an issue for all postsecondary institutions, the data from previous studies indicated a lower retention rate for online students than for their traditional face-to-face classmates. The current study contradicted those arguments. In the following sections, implications for action, recommendations for research, and concluding remarks are addressed.

Implications for Action

As postsecondary institutions move into the 21st century, many have examined issues of student recruitment and retention in an effort to meet the demands of both their students and their communities. The majority of postsecondary institutions have initiated online education as a strategy to recruit students from beyond their traditional geographic areas. This study supported existing research utilizing grades as a measure of achievement and should alleviate doubt that online students are shortchanged in their education. The transition of existing face-to-face courses to an online delivery model can be accomplished without sacrificing achievement of course and program goals.
The study also examined course and program retention data, finding no significant differences between online and traditional students in the TA program. The findings of this study support the expansion of additional online courses and programs within the School of Applied Studies. Finally, this study can provide the basis for further action, including analyzing other programs and courses offered in the online format by the University. The analysis of other programs offered in an online delivery model would enhance further development of online courses and programs. Recommendations for Future Research Distance education delivery has expanded dramatically with the use of the Internet for online instruction. The present study could be continued in future years to measure the effects of specific curriculum delivery models and changes made to online 58 delivery platforms. In addition, the study could be expanded to include specific characteristics of student retention named in the literature, such as examining whether the age and entering GPA of students provides any insight into course and program retention. The study could also be expanded to include other universities with similar baccalaureate-degree completion programs and other disciplines. Because the body of research is limited concerning the baccalaureate-degree completion of students who begin their postsecondary education in career-oriented instruction, there is value in continuing to study baccalaureate completion rates, both in an online format and in more traditionally based settings. Concluding Remarks The current study examined a Technology Administration program that has been offered in both online and face-to-face format, utilizing data from Fall 2002 through Spring 2008. The TA program was developed to allow students who had completed an occupationally oriented AAS degree to complete a bachelor’s degree program. Three hypotheses were tested in this study, examining course grades, course retention, and program retention of students enrolled in online and face-to-face courses in Technology Administration. No significant difference was found for the three hypotheses. These results form a strong foundation for expanding online courses and programs at Washburn University. By addressing two of the major concerns of educators, achievement and retention, the study results allow expansion of online courses and programs to benefit from data-driven decision-making. Other institutions can and should utilize data to examine existing online course and program data. 59 REFERENCES Allen, I. E., & Seaman, J. (2003). Seizing the opportunity: The quality and extent of online education in the United States, 2002 and 2003. Needham, MA: The Sloan Consortium. Allen, I. E., & Seaman, J. (2005). Growing by degrees: Online education in the United States, 2005. Needham, MA: The Sloan Consortium. Allen, I. E., & Seaman, J. (2007a). Making the grade: Online education in the United States. Needham, MA: The Sloan Consortium Allen, I. E., & Seaman, J. (2007b). Online nation: Five years of growth in online learning. Needham, MA: The Sloan Consortium. Arle, J. (2002). Rio Salado College online human anatomy. In C. Twigg, Innovations in online learning: Moving beyond no significant difference (p. 18). Troy, NY: Center for Academic Transformation. Atkins, T. (2008, May 13). Changing times bring recruiting challenges at WU. Retrieved May 15, 2008, from CJOnline Web site at http://cjonline.com/stories/ 051308/loc_278440905.shtml Berge, Z., & Huang, L. P. 
(2004, May). A model for sustainable student retention: A holistic perspective on the student dropout problem with special attention to elearning. American Center for the Study of Distance Education. Retrieved April 17, 2009, from DEOSNEWS Web site at http://www.ed.psu.edu/acsde/deos/deosnews/deosarchives.asp 60 Bradburn, E., Hurst, D., & Peng, S. (2001). Community college transfer rates to 4-year institutions using alternative definitions of transfer. Washington, DC: National Center for Education Statistics. Brown, B. W., & Liedholm, C. (2002, May). Can Web courses replace the classroom in principles of microeconomics? The American Economic Review, 92, 444-448. California Community Colleges Chancellor's Office. (2009, April 20). Retention rates for community colleges. Retrieved April 20, 2009, from https://misweb.cccco.edu/mis/onlinestat/ret_suc_rpt.cfm?timeout=800 Carmel, A. & Gold, S. S.. (2007). The effects of course delivery modality on student satisfaction and retention and GPA in on-site vs. hybrid courses. Retrieved September 15, 2008, from ERIC database. (Doc. No. ED496527) Carnevale, D. (2006, November 17). Company's survey suggests strong growth potential for online education. The Chronicle of Higher Education , p. 35. Carr, S. (2000, February 11). As distance education comes of age, the challenge is keeping the students. The Chronicle of Higher Education , pp. 1-5. Cohen, A., & Brawer, F. (1996). The American community college. San Francisco: Jossey-Bass. Diaz, D. (2002, May-June). Online drop rates revisited. Retrieved April 8, 2008, from The Technology Source Archives Web site at http://www.technologysource.org/article/online_drop_rates-revisited/ Dougherty, K. J. (1992). Community colleges and baccalaureate attainment. The Journal of Higher Education, 63, 188-214. 61 Ebel, R., & Frisbie, D. (1991). Essentials of educational measurement. Prentice Hall: Englewood Cliffs, NJ. The Harvard guide. (2004). Retrieved May 20, 2008, from http://www.news.harvard.edu/guide Hogan, R. (1997, July). Analysis of student success in distance learning courses compared to traditional courses. Paper presented at Sixth Annual Conference on Multimedia in Education and Industry, Chattanoga, TN. Jacobs, J., & Grubb, W. N. (2003). The federal role in vocational education. New York: Community College Research Center. Joliet Junior College history. (2008). Retrieved May 20, 2008, from Joliet Junior College Web site at http://www.jjc.edu/campus_info/history/ Kansas Board of Regents. (2002-2003). Degree and program inventory. Retrieved May 14, 2008, from http://www.kansasregents.org Keeley, E. J., & House, J. D. (1993). Transfer shock revisited: A longitudinal study of transfer academic performance. Paper presented at the 33rd Annual Forum of the Association for Institutional Research, Chicago, IL. Knowles, M. S. (1994). A history of the adult education movement in the United States. Melbourne, FL: Krieger. Laanan, F. (2003). Degree aspirations of two-year students. Community College Journal of Research and Practice, 27, 495-518. Lynch, T. (2002). LSU expands distance learning program through online learning solution. T H E Journal (Technological Horizons in Education), 29(6), 47. 62 Machtmes, K., & Asher, J. W. (2000). A meta-analysis of the effectiveness of telecourses in distance education. The American Journal of Distance Education, 14(1), 27-41. Gilman, E. W., Lowe, J., McHenry, R., & Pease, R. (Eds.). (1998). Merriam-Webster’s collegiate dictionary. Springfield, MA: Merriam. Nash, R. 
(1984, Winter). Course completion rates among distance learners: Identifying possible methods to improve retention. Retrieved April 19, 2009, from Online Journal of Distance Education Web site at http://www.westga.edu/~distance/ojdla/winter84/nash84.htm National Center for Education Statistics. (2000). Distance education statistics 1999-2000. Retrieved March 13, 2008, from at http://nces.ed.gov/das/library/tables_listing National Center for Education Statistics. (2001). Percentage of undergraduates who took any distance education courses in 1999-2000