memory, which requires a high degree
of synaptic plasticity. The loss of thin
dendritic spines could impair neuro-
nal communication and contribute
to cognitive decline. So far, direct
evidence of their role in cognitive
decline is lacking, and more studies
are needed.
Finally, the formation of new
neurons also declines with age.
Although neurogenesis was once
believed to halt after birth, we now
know of two brain regions that con-
tinue to add new neurons through-
out life: the olfactory bulbs and the
dentate gyrus of the hippocampus.
Studies suggest that the rate of neu-
rogenesis plummets with age in mice,
but recent human studies suggest a
more modest decline. It is not yet
clear whether neurogenesis apprecia-
bly affects cognition in the aging hu-
man brain, but mouse studies indicate
that strategies that boost neurogenesis
can enhance cognitive function.
Chemical Changes
The amount of neurotransmit-
ters and the number of their recep-
tors might also decline with age.
Several studies have reported that
less dopamine is synthesized in the
aged brain, and there are fewer re-
ceptors to bind the neurotransmitter.
Less robust evidence indicates that
the amount of serotonin might also
decline with age.
WHY DOES THE BRAIN AGE?
From cortical thinning to the
loss of dendritic spines, you’ve
seen how the brain ages. But what
causes these changes? Many different
theories have been advanced to
explain why neurons, and cells in
general, age. One possibility is that
changes in gene expression play a role.
Researchers have found that genes
important for synaptic plasticity are
expressed less in the brains of older
people than in the brains of younger
adults. The underexpressed genes also
showed more signs of damage.
Oxidative Stress
and DNA Damage
DNA damage that accumulates
over a lifetime could contribute
to aging processes throughout the
brain and body, and DNA damage
due to oxidative stress has received a
great deal of attention. Every cell in
your body contains organelles called
mitochondria, which function a bit
like cellular power plants, carrying
out chemical reactions that provide
energy for cell use. Some of these
metabolic reactions produce harmful
byproducts called free radicals, highly
reactive molecules which, if left un-
checked, can destroy fats and proteins
vital to normal cell function and can
damage DNA as well.
Your body has natural defense
mechanisms to neutralize free radi-
cals. Unfortunately, these mechanisms
decline with age, leaving aging tissues
more vulnerable to oxidative damage
by the free radicals. Studies of brain
cells have shown that damage to their
mitochondrial DNA accumulates with
age. In addition, the brains of people
with mild cognitive impairment and
Alzheimer’s disease show more signs
of oxidative damage than the brains of
healthy people. Studies in rodents also
link increased oxidative damage to
memory impairments.
Your brain is one of the most
metabolically active organs, demand-
ing around 20 percent of the body’s
fuel. Its enormous energy require-
ments might make the brain even
more vulnerable than other tissues
to the metabolic changes that occur
in aging. While the brain’s energy
demands remain high, its energy
supply can no longer keep pace; the
brain’s ability to take up and use glu-
cose diminishes and mitochondrial
metabolism declines.
Immune Dysfunction
Immune dysfunction often occurs
in conjunction with the metabolic
changes seen in aging. Microglia, the
brain’s resident immune cells, per-
form many important jobs: defending
against pathogens, cleaning up cellular
debris, and helping maintain and re-
model synapses. These inflammatory
responses are protective, but a pro-
longed inflammatory state is harmful
to brain health. Microglia become
more reactive with age, increasing the
inflammatory response in the brain
while also damping production of
helpful anti-inflammatory molecules.
Mouse studies suggest that excessive
microglial activity also contributes to
cognitive impairments.
Impaired Protein Recycling
We know that excessive buildup
of abnormal proteins in the brain
contributes to age-related neurode-
generative diseases like Alzheimer’s
and Parkinson’s. Buildup of proteins
and other cell components can also
contribute to cellular degeneration
in the healthy brain. Cells normally
break down and recycle damaged
proteins and molecules, using a pro-
cess that is usually efficient but not
perfect. Over time, damaged mole-
cules can build up in cells and prevent
them from functioning normally.
Because neurons in the brain are not
replaced as often as cells in other
parts of the body (for example, bone
marrow, intestinal lining, hair folli-
cles), brain cells might be even more
vulnerable to this buildup of damaged
molecules. Also, the cellular ma-
chinery involved in breakdown and
recycling processes degrades with age,
reducing the efficiency of the “waste
removal” systems.
Finally, remember that changes
in the aging brain occur within the
context of other changes throughout
the body. Researchers speculate that
worsening cardiovascular health, for
example, could contribute to, or even
drive, many changes seen in the
aging brain.
HEALTHY AGING
We have learned how
the brain changes with
age and why these changes can occur.
Now let’s turn our attention to a
growing field in neuroscience that
explores ways to slow these changes
and preserve healthy brain function.
Diet and Exercise
Strong evidence now suggests
that habits and choices that keep your
body healthy also benefit your mind.
Poor cardiovascular health puts a
person at increased risk of age-related
cognitive impairment. Diets rich in
vegetables, fruits, and whole grains,
and low in meat and dairy products,
can reduce cardiovascular risk factors
linked to cognitive impairment, such
as high blood pressure and high levels
of LDL cholesterol. Indeed, observa-
tional studies have found that people
who follow plant-rich diets such as
the Mediterranean diet or Dietary
Approaches to Stop Hypertension
(DASH) are less likely to develop
cognitive decline and dementia.
Specific nutrients have been linked
to improved cognitive performance
and lower rates of dementia. Anti-
oxidants, such as vitamins C and E,
flavonoids, and omega-3 fatty acids
have received considerable attention,
with observational studies showing
that high dietary intake of these
compounds is beneficial. However,
the results of lifestyle intervention
studies using supplements have been
more mixed. Finally, caloric restriction
— substantially reducing the number
of calories eaten without leading to
malnutrition — has been linked to
improved cognitive health as well as a
longer lifespan.
Synapses begin to weaken as a person ages, which can contribute to normal
cognitive decline.
Growing evidence shows that
aerobic exercise can improve cognitive
function and offset some of the de-
clines seen in aging. Numerous studies
have found that people who engage
in regular physical activity show
improved learning, improved mem-
ory, and a reduced risk of developing
dementia. Physical activity might even
slow the progression of Alzheimer’s
disease and dementia, and higher levels
of physical activity have been linked
to improvements in some markers of
structural brain health, such as reduced
cortical thinning and less shrinkage in
the hippocampus.
Exercise exerts its neuroprotec-
tive effects in the brain by improving
neuroplasticity — the brain’s ability to
form and reorganize connections be-
tween neurons in response to changes
in behavior and environment. Scien-
tists also believe that exercise increases
neurogenesis (the formation of new
nerve cells) which, in turn, enhances
neuroplasticity. Evidence from rodent
studies confirms that exercise increases
neurogenesis: Older mice allowed to
run on a wheel have higher rates of
neurogenesis in the hippocampus than
sedentary mice, and they perform bet-
ter on learning and memory tests. Ex-
ercise can also improve blood flow and
increase production of neurotrophic
factors that support new neurons and
synapses. For humans, starting exercise
later in life can be beneficial, but the
studies suggest that adopting an exer-
cise program earlier in life could yield
even more neuroprotective benefits.
Mental Stimulation
and Social Networks
Mental stimulation and large so-
cial networks can also improve cogni-
tive function in aging. In lab studies,
mice housed in cognitively stimulat-
ing environments with many oppor-
tunities for social interaction perform
better on learning and memory tests
as they age compared to mice housed
in standard cages. Much like physical
exercise, cognitive stimulation appears
to enhance neuroplasticity by increas-
ing neurogenesis and boosting levels
of important neurotrophic factors.
People who perform cognitive-
ly-demanding work or engage in
stimulating activities such as reading,
solving puzzles, or playing a musical
instrument have lower rates of cog-
nitive decline with aging. An active
social life has also been shown to be
beneficial for cognition as we age.
Neuroscientists have learned a
lot about the aging brain — how it
changes, why it changes, and how
to maintain healthy cognitive func-
tioning as we age. Even so, many
questions remain. Answers to those
questions could identify new strate-
gies for protecting the brain, not only
in our later years, but throughout
our lives.
Exercise has been shown to increase neurogenesis in the adult brain, and can slow the
cognitive decline associated with aging.
iStock.com/artyme83.
CHAPTER 9
Brain States
Have you ever considered the
ups and downs that occur
during your day? Speaking
literally, you are up and awake during
the day and lying down sleeping at
night. Speaking figuratively, ups and
downs could mean that you experi-
ence periods of elevated alertness and
arousal compared with your mood when
you are tired or relaxed. Asleep, awake,
aroused, and relaxed are different brain
states, meaning that the brain’s activity
is different during each of these peri-
ods. Scientists have looked deep inside
the brain to understand what sleep is
and how rest differs from being alert.
This research is especially important
for people like doctors, pilots, and shift
workers who sometimes must focus and
make important decisions with very little
sleep. Research on brain states can also
help people who have disorders of sleep,
attention, and learning.
SLEEP
How many hours of sleep do you
get every night? Most people spend
one-third of their lives asleep. While
that might appear to be a lot of time
spent doing nothing, our brains are
active while we rest each night. The
activity in our brains during sleep is
important for brain health and for
solidifying memories.
Most people feel tired and un-
able to focus if they don’t get enough
sleep. In some cases, too little sleep
can impair a person’s driving as much
as drinking alcohol. The long-term ef-
fects of lacking sleep also involve many
health risks. Several studies in humans
have revealed that sleep-deprived
people are at increased risk for a wide
range of health issues including diabe-
tes, stress, obesity, high blood pres-
sure, anxiety, cognitive impairment,
and depression.
Brain Activity During Sleep
Scientists can measure the brain’s
electrical activity using electroenceph-
alography (EEG). Electrodes attached
to the scalp detect and record the net
electrical activity of hundreds of thou-
sands of cortical nerve cells. When a
neuron is active, ions move in and out
of the cell, altering the electrical charge
across the cell membrane. An EEG de-
tects the net electrical charge produced
when neurons increase and decrease
their activity as a group, in synchrony.
The results are “brain waves” — the
cyclic rising and falling of brain activ-
ity that can be important indicators of
brain function. In sleep studies, scien-
tists now recognize two main states:
slow wave sleep (SWS) and rapid eye
movement sleep (REM).
SWS gets its name from the
high-amplitude, low-frequency
brain waves in EEG recordings.
The high amplitude of slow waves
indicates that many cortical neu-
rons are switching their activity in a
synchronized way from a depolarized
(more excitable) state to a hyperpo-
larized (less excitable) state and back
again. These slow waves appear to
be important to sleep function —
the longer a person stays awake, the
more slow waves they will experience
during the SWS state. Slow waves
become less frequent the longer the
person is asleep. If awakened during
SWS, most people recall only frag-
mented thoughts, not active dreams.
Have you ever seen a cat dream-
ing — twitching its whiskers or paws
while it sleeps? Dreaming happens
mainly during REM sleep, which takes
its name from the periodic rapid eye
movements people make in this state.
Brain activity recorded during REM
looks very similar to EEGs recorded
while awake. EEG waves during REM
sleep have much lower amplitudes
than the SWS slow waves, because
neuron activity is less synchronized
— some nerve cells depolarize while
others hyperpolarize, and the “sum” of
their electrical states is less positive (or
negative) than if they acted in synchro-
ny. Paradoxically, the fast, waking-like
EEG activity during REM sleep is ac-
companied by atonia, a loss of muscle
tone causing the body to become tem-
porarily paralyzed. The only muscles
remaining active are those that enable
breathing and control eye movements.
Oddly enough, the neurons of our
motor cortex fire as rapidly during
REM sleep as they do during waking
movement — a fact that explains why
movements like a kitten’s twitching
paws can coincide with dreams.
During the night, periods of
SWS and REM sleep alternate in
90-minute cycles with 75–80 minutes
of SWS followed by 10–15 minutes
of REM sleep. This cycle repeats,
typically with deeper and longer peri-
ods of REM sleep towards morning.
To study sleep disorders, researchers
often use mice that have sleep struc-
tures qualitatively very similar to hu-
mans; however, rodents have shorter
and more frequent sleep episodes
lasting 3–30 minutes (sometimes longer). Rodents also sleep more during
the day and are more active at night.
Compare that to human adults, who
are typically more active during the
day and have one sleep episode at
night lasting about 8 hours.
This chart shows the brain waves of an individual recorded by an EEG machine during a night's sleep. As the person falls asleep,
the brain waves slow down and become larger. Throughout the night, the individual cycles through sleep stages, including REM sleep,
where brain activity is similar to wakefulness.
Sleep Regulation
How does the brain keep us
awake? Wakefulness is main-
tained by the brain’s arousal systems,
each regulating different aspects of the
awake state. Many arousal systems are
in the upper brainstem, where neurons
connecting with the forebrain use the
neurotransmitters acetylcholine,
norepinephrine, serotonin, and
glutamate to keep us awake. Orexin-
producing neurons, located in the
hypothalamus, send projections to the
brainstem and spinal cord, the thala-
mus and basal ganglia, as well as to the
forebrain, the amygdala, and dopa-
mine-producing neurons. In studies of
rats and monkeys, orexin appears to
exert excitatory effects on other arousal
systems. Orexins (there are two types,
both small neuropeptides) increase
metabolic rate, and their production
can be activated by insulin-induced
low blood sugar. Thus, they are
involved in energy metabolism. Given
these functions, it comes as no surprise
that orexin-producing neurons are
important for preventing a sudden
transition to sleep; their loss causes
narcolepsy, as described below. Orexin
neurons also connect to hypothalamic
neurons containing the neurotransmit-
ter histamine, which plays a role in
staying awake.
The balance of neurotransmitters
in the brain is critically important for
maintaining certain brain states. For
example, the balance of acetylcholine
and norepinephrine can affect wheth-
er we are awake (high acetylcholine
and norepinephrine) or in SWS (low
acetylcholine and norepinephrine).
During REM, norepinephrine re-
mains low while acetylcholine is high,
activating the thalamus and neocortex
enough for dreaming to occur; in
this brain state, forebrain excitation
without external sensory stimuli pro-
duces dreams. The forebrain becomes
excited by signals from the REM
sleep generator (special brainstem
neurons), leading to rapid eye move-
ments and suppression of muscle
tone — hallmark signs of REM.
During SWS, the brain systems
that keep us awake are actively sup-
pressed. This active suppression of
arousal systems is caused by the ven-
trolateral preoptic (VLPO) nucleus, a
group of nerve cells in the hypothala-
mus. Cells in the VLPO release the in-
hibitory neurotransmitters galanin and
gamma-aminobutyric acid (GABA),
which can suppress the arousal sys-
tems. Damage to the VLPO nucleus
causes irreversible insomnia.
Sleep-Wake Cycle
Two main factors drive your body
to crave sleep: the time of day or night
(circadian system) and how long you
have been awake (homeostatic system).
The homeostatic and circadian systems
are separate and act independently.
The circadian timing system is
regulated by the suprachiasmatic
nucleus, a small group of nerve cells
in the hypothalamus that functions
as a master clock. These cells express
“clock proteins,” which go through a
biochemical cycle of about 24 hours,
setting the pace for daily cycles of
activity, sleep, hormone release, and
other bodily functions. The master
clock neurons also receive input
directly from the retina of the eye.
Thus, light can reset the master clock,
adjusting it to the outside world’s
day/night cycle — this explains how
your sleep cycles can shift when you
change time zones during travel. In
addition, the suprachiasmatic nucleus
sends signals through different brain
regions, eventually contacting the
VLPO and the orexin neurons in the
lateral hypothalamus, which directly
regulate arousal.
What happens in the brain when
we don’t get enough sleep? The second
system that regulates sleepiness is the
homeostatic system, which makes you
feel sleepy if you stay awake longer
than usual. One important sleep
factor is a chemical in the brain called
adenosine. When you stay awake for
a long time, adenosine levels in the
brain increase. The increased ade-
nosine binds to specific receptors on
nerve cells in arousal centers to slow
cellular activity and reduce arousal.
Adenosine can increase the number of
slow waves during SWS. As you get
more sleep, adenosine levels fall and
slow waves decrease in number. Caf-
feine acts as a stimulant by binding to
adenosine receptors throughout the
brain and preventing their interaction
with adenosine. As a result, in the
presence of caffeine, fewer receptors
are available for the slowing influence
of adenosine.
People often say they need to
“catch up on sleep.” But can you really
make up for lost sleep? Normally, the
homeostatic and circadian systems
act in a complementary fashion to
produce a normal 24-hour cycle of
sleep and wakefulness. Nonetheless,
activating the brain’s arousal system
can keep us awake even after a long pe-
riod of wakefulness — for example, a
late-night study session to prepare for
an important exam. In normal circum-
stances, the homeostatic system will
respond to the loss of sleep by increas-
ing the duration of ensuing sleep and
increasing the number of slow waves
during the SWS episodes. As noted
above, this rebound slow wave activity
correlates with the previous time spent
awake and is mediated by adenosine.
Sleep Disorders
The most common sleep disorder,
and the one most people are familiar
with, is insomnia. Some people with
insomnia have difficulty falling asleep
initially; others fall asleep, then awak-
en part way through the night and
can’t fall back asleep. Several common
disorders, listed below, disrupt sleep
and prevent people from getting an
adequate amount of sleep.
Daytime sleepiness (not narcolep-
sy), characterized by excessive feelings
of tiredness during the day, has many
causes including sleep apnea (see be-
low). Increased daytime sleepiness can
increase the risk of daytime accidents,
especially car accidents.
Sleep apnea occurs when the air-
way muscles of the throat relax during
sleep, to the point of collapse, closing
the airway. People with sleep apnea
have difficulty breathing and wake up
without entering the deeper stages of
SWS. This condition can cause high
blood pressure and may increase the
risk of heart attack. Treatments for
sleep apnea focus on reducing airway
collapse during sleep; simple changes
that may help include losing weight,
avoiding alcohol or sedating drugs
prior to sleep, and avoiding sleeping
on one’s back. However, most people
with sleep apnea require breathing
machines to keep their airway open.
One such device, called a continuous
positive airway pressure or “CPAP”
machine, uses a small mask that fits
over the nose to provide an airstream
under pressure during sleep. In some
cases, people need surgery to correct
their airway anatomy.
REM sleep behavior disorder
occurs when nerve pathways in the
brain that prevent muscle movement
during REM sleep do not work.
Remember that dreaming happens
during REM sleep, so imagine people
literally acting out their dreams by
getting up and moving around. This
can be very disruptive to a normal
night’s sleep. The cause of REM be-
havior disorder is unknown, but it is
more common in people with degenerative neurological diseases such as Parkinson's and certain dementias, and in people who have had a stroke.
The disorder can be treated with
drugs for Parkinson’s or with a ben-
zodiazepine drug, clonazepam, which
enhances the effects of the inhibitory
neurotransmitter GABA.
Electroencephalography measures brain activity through sensors placed on the head. It can
record how the brain reacts to all kinds of stimuli and activities, including sleep.
Simon Fraser University.
Narcolepsy: An Example
of Sleep Disorder Research
Narcolepsy is a relatively
uncommon sleep disorder —
only 1 case per 2,000 people in the
United States — in which the brain
lacks the special neurons that help
control the transition into sleep, so
that the regular cycling is disrupted.
People with narcolepsy have sleep
attacks during the day, causing them
to suddenly fall asleep, which is
especially dangerous if they are
driving. The problem is caused by the
loss of orexin neurons in the lateral
hypothalamus. People with narcolep-
sy tend to enter REM sleep very
quickly and may even enter a dream-
ing state while still partially awake, a
condition known as hypnagogic
hallucination. Some people with
narcolepsy also have attacks in which
they lose muscle tone — similar to
what happens in REM sleep, but
while they’re awake. These attacks of
paralysis, known as cataplexy, can be
triggered by emotional experiences
and even by hearing a funny joke.
Recent research into the mech-
anisms of narcolepsy has provided
important insights into the processes
that control the mysterious transitions
between waking, slow wave sleep,
and REM sleep states. Orexin (in the
lateral hypothalamus) is critical for
preventing abnormal transitions into
REM sleep during the day. In one
study, scientists inactivated the gene
for orexin in mice and measured their
sleep patterns. They found that mice
lacking the orexin gene showed symp-
toms of narcolepsy. Similarly, humans
with narcolepsy have abnormally low
levels of orexin in their brain
and spinal fluid.
Because orexin levels are disrupt-
ed in narcolepsy, scientists also began
studying neurons that were neighbors
to orexin neurons to see what hap-
pened if the neighboring neurons were
activated in narcoleptic mice. Those
neurons contained melanin-concen-
trating hormone, and stimulating
them (using a technique called opto-
genetics) induced sleep — opposite to
the effect of stimulating orexin neu-
rons. A balance between the activation
of orexin neurons and their neighbor-
ing neurons could control the tran-
sition between waking and sleeping.
These findings will be important in
developing treatments for narcolepsy.
AROUSAL
Think about what happens in
your body and mind when you speak
in front of a crowd — your brain
state is very different from when
you are asleep. Perhaps you notice
changes in your breathing, heart rate,
or stomach. Maybe your thoughts are
racing or panicked. Or maybe you
are energized and excited to perform
for your audience. These are exam-
ples of the complex brain state
called arousal.
Rather than merely being awake,
arousal involves changes in the body
and brain that provide the motivation to act — teaching a class,
speaking in public, or focusing your
attention. People experience arousal
daily when searching for food while
hungry, or when talking with other
people (social interaction). Arousal is
also important for reproduction and
for avoiding danger.
The level of arousal varies across
a spectrum from low to high. When
arousal falls below a certain threshold
we can transition from wake to sleep,
for example. But under heightened
arousal, like intense anxiety, we cannot drop below this threshold, so we stay awake.
Neurotransmitters
During arousal, the brain must de-
vote resources to specific brain regions,
much as an emergency call center
redirects resources like ambulances
and fire trucks during a fire. Specific
types of neurons in the brain regions
involved in arousal release multiple
neurotransmitters, telling the rest of
the brain and the body to be on alert.
These neurotransmitters are dopamine
(for movement), norepinephrine (for
alertness), serotonin (for emotion),
and acetylcholine and histamine,
which help the brain communicate
with the body to increase arousal.
Sensory Input
While neurotransmitters provide
the internal signals for arousal, external
signals from the outside world — like
the bright lights (visual input) and
cheering crowds (auditory input) at a
stage performance — can also stimu-
late arousal. Sensory input gets sorted
in the brain region called the thala-
mus. Often called a “sensory clearing
house,” the thalamus regulates arous-
al, receiving and processing sensory
inputs from brain regions important
in senses like vision and hearing and
relaying these inputs to the cortex.
Autonomic Nervous System
Once the brain is aroused, what
does the body do? The reticular
activating system, in the brainstem, co-
ordinates signals coming from sensory
inputs and neurotransmitters to make
sense of events in the brain and pass
that information to the rest of the
body. The reticular activating system
specifically controls the autonomic
nervous system, which affects heart
rate, blood flow, and breathing. By
controlling these automatic body pro-
cesses, the reticular activating system
sets up the physical state of arousal,
bringing important resources like oxy-
gen and nutrients to parts of the body
where they are needed.
Together, the changes that happen
in the brain and body during arous-
al enable us to be alert and focused,
which helps us process information
quickly. Using this information, we
can choose the appropriate emotional
response or physical action for a
given situation.
Sexual Arousal
Several complex brain systems and
endocrine (hormone) systems contrib-
ute to sexual arousal and behaviors, but
the brain regions, neurotransmitters,
and body systems are similar to those
involved in general arousal. The dis-
tinguishing factor is that sexual arousal
also involves hormones such as estrogen
and testosterone, which then activate
neurons that release the same neu-
rotransmitters that are released during
general arousal. Many human and ani-
mal studies report interactions between
sex hormones and the neurotransmitters
dopamine, serotonin, GABA, and
glutamate. Researchers have also found
that brain regions such as the hypo-
thalamus, amygdala, and hippocampus
contain many estrogen and progester-
one receptors, and brain regions that
mediate feelings of reward (nucleus
accumbens) and emotions like pleasure
(amygdala) motivate sexual behaviors.
Overall, the primary involvement of sex
hormones is key in defining the brain
state of sexual arousal.
ATTENTION
If you are paying
attention right now,
there should be detectable changes in
your heart rate, breathing, and blood
flow. If that sounds familiar, it’s
because those same physiological
changes occur during arousal, which
is necessary for being alert and paying
attention. As mentioned previously,
the state of arousal calls for reactions
to the environment. To make deci-
sions about what to do, you need to
focus on what’s happening in the
environment, especially involving
anything relevant to your goals. For
example, if your goal is to run away
from an angry bear, you need to be
alert and pay attention to where
you’re running so you don’t trip and
fall. Scientists have theorized that the
state of arousal speeds processing and
improves comprehension of environ-
mental details. Otherwise, your brain
would need an infinite amount of
time and energy to process all of its
sensory inputs (sounds, sights, smells,
and other feelings), because the
environment is always changing.
Focus
Even with multitasking, it is
impossible for the brain to process
all its sensory inputs. Instead, people
focus their attention on one thing at
a time. Attention is a fascinating ability because it gives you so much control, letting you fine-tune your focus to different locations,
times, and topics. Consider the page
you are reading right now. Although
you can see the whole page, you focus
on only one line at a time. Alterna-
tively, you can turn your attention
to the past — just minutes ago when
you were reading about arousal. Or
you can ignore the sentences alto-
gether and focus on the number of
times the word “you” occurs on this
page. Scientists recognize two types
of attention, which involve different
brain processes: voluntary (endog-
enous) attention and involuntary
(exogenous) attention.
Voluntary attention happens
when you choose what to focus on —
like finding a loved one in a crowd.
The frontal and parietal cortices of
the brain are active when you control
your attention or direct it towards a
specific object or location. Involun-
tary attention occurs when something
in the environment (like a sudden
noise or movement) grabs your atten-
tion. Involuntary attention is a dis-
traction from your chosen goals and,
in fact, researchers often use distrac-
tor objects in attention experiments.
Distractors can be emotional, like
pictures of family, or non-emotional
images that stand out from other
stimuli, like a red circle surrounded
by gray squares. Brain regions in the
right hemisphere, collectively known
as the ventral frontoparietal network,
form a system that processes new and
interesting stimuli that distract you
from the task at hand. Research on at-
tention can help us understand visual
tasks, learning, child development,
and disorders of attention.
Disorders of Attention
Paying attention for long
periods of time, such as a
3-hour lecture, can be difficult for
many people. For some people, even
focusing for a short time can be hard.
Several disorders that affect the ability
to pay attention are attention deficit
hyperactivity disorder (ADHD),
schizophrenia, prosopagnosia, and
hemineglect syndrome. It may seem
strange to regard schizophrenia as an
attention disturbance, but some
psychiatric studies suggest that it
involves a failure of selective attention.
Prosopagnosia, or face blindness, is a
cognitive disorder in which a person is
unable to recognize faces — even their
own family members. The severity of
this condition varies, and genetic
factors might be involved. Attention
disorders have various causes, but we
will focus on hemineglect syndrome,
caused by damage to the right parietal
cortex, a brain region important in
involuntary attention.
Between 50 and 82 percent of patients who suffer a stroke in the right
hemisphere experience hemineglect
syndrome, also known as spatial ne-
glect and unilateral neglect. In these
cases, patients with neglect ignore the
left side of their visual field. Some-
times they ignore the left side of the
body and the left side of individual
objects, as well. Diagnosis of hemine-
glect syndrome can be done with a
pen and paper. For example, patients
can be instructed to draw a copy of
a picture like a butterfly or a castle,
and those patients with hemineglect
usually draw only the right half of
the picture or leave out details of the
left side. Research on patients with
hemineglect syndrome contributes to
our understanding of rehabilitation
after stroke, as well as the role of the
right parietal cortex in attention
and perception.
REST: DEFAULT MODE
NETWORK
What is the difference
between being alert
and resting while awake? During times
of rest and relaxation, you’re usually
avoiding heavy thinking or complicat-
ed tasks, and parts of the brain called
the default mode network are more
active. You may think of the default
mode network as a personal lullaby or
a playlist that turns on when you are
ready to relax. Activity of the default
mode network decreases (the lullaby
gets quieter) when you start doing or
thinking about a demanding task.
Human studies using imaging tech-
niques such as functional magnetic
resonance imaging (fMRI) and
positron emission tomography (PET)
have identified which brain regions
belong to the default mode network.
These brain areas, which are involved
in emotion, personality, introspection,
and memory, include frontal brain
regions (ventromedial prefrontal
cortex, dorsomedial prefrontal cortex,
and anterior cingulate cortex), as well
as the posterior cingulate cortex, lateral
parietal cortex, and precuneus.
Although the exact role of the
default mode network is unclear,
the functions of its “participating”
brain regions provide hints about its
purpose. Studies on emotion have
revealed that activity in the ventro-
medial PFC is directly related to
how anxious a subject feels while
performing a task — suggesting that
the default mode network may play a
role in regulating emotion and mood.
Activity in the dorsomedial PFC (a
region involved in self-referential
or introspective thoughts) increases
when a person is at rest and day-
dreaming. The dorsomedial PFC is
also involved in stream-of-conscious-
ness thoughts and thoughts about
oneself in the past, present, or future
(autobiographical self). The roles of
these regions suggest that the default
mode network may also function
in self-reflection and our sense of
self in time.
The posterior brain regions of the
default mode network (posterior cin-
gulate cortex, lateral parietal cortex,
and precuneus) become more active
when remembering concrete mem-
ories from past experiences. These
brain regions are connected with the
hippocampus, which is important for
learning and forming memories. Both
the hippocampus and the default
mode network are more active when
a person is at rest in the evening and
less active when waking up early in
the day. These patterns indicate that
the default mode network helps to
process and remember the events
of the day.
Future studies using electrical re-
cordings from inside the human brain
can be paired with fMRI to tell us
more about the brain activity patterns
of the default mode network and how
brain regions coordinate their activity
during tasks that utilize the functions
of this network.
CHAPTER 10
The Body in Balance
The cells of your body are
immersed in a constantly
changing environment. The
nutrients that sustain them rise and fall
with each meal. Gases, ions, and other
solutes flow back and forth between
your cells and blood. Chemicals bind
to cells and trigger the building and re-
lease of proteins. Your cells digest food,
get rid of wastes, build new tissues,
and destroy old cells. Environmental
changes, both internal and external,
ripple through your body’s physio-
logical systems. One of your brain’s
less-visible jobs is to cope with all these
changes, keep them within a normal
range, and maintain the healthy func-
tions of your body.
The tendency of your body’s tissues
and organ systems to maintain a condi-
tion of balance or equilibrium is called
homeostasis. Homeostasis depends
on active regulation, with dynamic
adjustments that keep the environ-
ment of your cells and tissues relatively
constant. The brain is part of many
homeostatic systems, providing signals
that coordinate your body’s internal
clocks and regulating hormone secre-
tion by the endocrine system. These
functions often involve a region of the
forebrain called the hypothalamus.
CIRCADIAN RHYTHMS
Almost every cell in your body
has an internal clock that tells
it when to become active, when to rest,
and when to divide. These clocks
broker changes in many of the body’s
physiological systems over a 24-hour,
or circadian, period. For example, the
clocks cause faster pulses of peristaltic
waves in your gut during the day and
make your blood pressure dip at night.
But because these clocks are deep
inside your body and cannot detect
daylight, none of them can tell time
on its own. Instead, daily rhythms are
coordinated by the suprachiasmatic
nucleus (SCN), a tiny group of
neurons in the hypothalamus.
Neurons in the SCN act like a met-
ronome for the rest of the body, emit-
ting a steady stream of action potentials
during the day and becoming quiet
at night. The shift between active and
silent states is controlled by cyclic in-
teractions between two sets of proteins
encoded by your body’s “clock” genes.
Researchers first identified clock genes
in the fruit fly Drosophila melanogaster
and studied how they keep time; since
then, a nearly identical set of genes has
been found in mammals. The SCN also
tracks what time it is based on signals
it receives from photoreceptors in the
retina, which keeps its activity in sync
with the Earth’s actual day/night cycle.
That little nudge is very important be-
cause, on their own, clock proteins take
slightly more than 24 hours to complete
a full cycle. Studies of animals deprived
of light have discovered that they go to
sleep and wake up a bit later each day.
An autonomic neural pathway
ties the daily rhythmic activity of the
SCN directly to other clocks in the
body. Neurons in the SCN stimulate an
adjacent region of the brain called the
paraventricular nucleus (PVN), which
in turn sends signals down a chain of
neurons through the spinal cord to the
peripheral organs of the body. You’ve al-
ready learned how signals in part of this
neural pathway stimulate orexin neu-
rons to regulate the body’s sleep/wake
cycle. Related pathways also govern the
secretion of melatonin, a hormone that
influences sleep behaviors. Specifically,
electrical activity originating in the SCN
enters the PVN’s neural network and
sends signals up to the pineal gland, a
small pinecone-shaped gland embedded
between the cerebral hemispheres. The
pineal gland secretes melatonin into the
bloodstream at night. Melatonin binds
to cells in many tissues, and although
it has no direct effect on clock gene
expression in the SCN, its systemic
effects seem to reduce alertness and
increase sleepiness. Light exposure trig-
gers signals that stop melatonin secre-
tion, promoting wakeful behaviors.
Together, these signals keep all the
body’s clocks synchronized to the same
24-hour cycle. Coordinated body clocks
enable your body’s physiological systems
to work together at the right times.
When your body prepares to wake from
sleep, 1) levels of the stress hormone
cortisol peak in the blood, releasing
sugars from storage and increasing
appetite, and 2) core body temperature
begins to drift upwards, raising your
body’s metabolic rate. These events,
synchronized with others, prepare your
body for a new day’s activity.
Desynchronizing the body’s phys-
iological clocks can cause noticeable
and sometimes serious health effects.
You might have experienced a familiar
example of circadian rhythm distur-
bance: jet lag. After crossing many time
zones in a short time period, a person’s
patterns of wakefulness and hunger
are out of sync with day and night.
Exposure to the local day/night cycle
resets the brain and body, but it can
take several days to get fully resynchro-
nized. Circadian rhythms can also be
disturbed by situations like late-shift
jobs or blindness, which decouple nor-
mal daylight signals from wake/sleep
cycles. Long-term circadian disruptions
are associated with health problems
including weight gain, increased rates
of insomnia, depression, and cancers.
HORMONES,
HOMEOSTASIS,
AND BEHAVIOR
Neurons can quickly
deliver the brain’s
messages to precise targets in the body.
Hormones, on the other hand, deliver
messages more slowly but can affect a
larger set of tissues, producing large-
scale changes in metabolism, growth,
and behavior. The brain is one of the
tissues that “listens” for hormonal
signals — neurons throughout the
brain are studded with hormone
receptors — and the brain’s responses
play an important part in regulating
hormone secretion and changing
behaviors to keep the body systems in
equilibrium. The brain regions
involved in hormone release are called
the neuroendocrine system.
The hypothalamus oversees the
production and release of many hor-
mones through its close ties to the pi-
tuitary gland. The paraventricular and
supraoptic nuclei of the hypothalamus
send axons into the posterior part of
the pituitary gland; activation of spe-
cific neurons releases either vasopressin
or oxytocin into capillaries within the
pituitary. Both of these molecules act
as neurotransmitters inside the brain,
but they are also hormones that affect
distant tissues of the body. Vasopressin
(also called antidiuretic hormone) in-
creases water retention in the kidneys
and constricts blood vessels (vasocon-
striction). Oxytocin promotes uterine
contractions during labor and milk
release during nursing.
Other hypothalamic regions send
axons to a capillary-rich area above the
pituitary called the median eminence.
When these neurons are activated,
they release their hormones into the
blood. These releasing (and inhibiting)
hormones travel through local blood
vessels to the anterior pituitary, where
they trigger (or inhibit) secretion of
a second specific hormone. Of the
seven anterior pituitary hormones,
five are trophic hormones — these
travel in the bloodstream to stimulate
activity in specific endocrine glands
(thyroid, adrenal cortex, ovaries, etc.)
throughout the body. The remaining
two hormones act on non-endocrine
tissues. Growth hormone stimulates
the growth of bone and soft tissues,
and prolactin stimulates milk produc-
tion by the breasts. Hormones released
from the anterior pituitary influence
growth, cellular metabolism, emotion,
and the physiology of reproduction,
hunger, thirst, and stress.
Many hormones produced by
the pituitary and its target endocrine
glands affect receptors inside the brain
— thus, these hormones can alter
neuronal function and gene transcrip-
tion in the hypothalamus. The effect
is to reduce the amount of hormone
released by the hypothalamus when
those circuits become active. These
negative feedback loops enable precise
doses of hormones to be delivered to
body tissues, and ensure that the hor-
mone levels are narrowly regulated.
One of these three-hormone
cascades regulates reproduction in
mammals. Its underlying pattern is the
same in both sexes: 1) gonadotropin-
releasing hormone (GnRH) from the
hypothalamus makes the anterior pi-
tuitary release 2) luteinizing hormone
(LH) and follicle stimulating hormone
(FSH), which in turn make the gonads
secrete 3) sex hormones and start the
development of mature eggs or sperm.
The neuroendocrine system maintains homeostasis, the body’s normal equilibrium, and
controls the response to stress. The adrenal gland releases the stress hormones norepineph-
rine, epinephrine, and cortisol, which quicken heart rate and prepare muscles for action.
Corticotropin-releasing hormone (CRH) is released from the hypothalamus and travels to the
pituitary gland, where it triggers the release of adrenocorticotropic hormone (ACTH). ACTH
travels in the blood to the adrenal glands, where it stimulates the release of cortisol.
Sex hormones, in turn, attach to
receptors in the hypothalamus and an-
terior pituitary and modify the release
of the hypothalamic and pituitary
hormones. However, sex hormones
regulate these feedback loops differ-
ently in males and females.
Male sex hormones induce simple
negative feedback loops that reduce
the secretion of gonadotropin-releas-
ing hormone, luteinizing hormone,
and follicle stimulating hormone. The
interplay among these hormones creates
a repetitive pulse of GnRH that peaks
every 90 minutes. The waxing and wan-
ing of GnRH keeps testosterone levels
relatively steady within body tissues,
maintains male libido, and keeps the
testes producing new sperm each day.
Female feedback patterns are
more complex. Over the course of the
month-long menstrual cycle, female sex
hormones exert both positive and nega-
tive feedback on GnRH, FSH, and LH.
When circulating levels of the
female sex hormones estrogen and
progesterone are low, rising follicle
stimulating hormone levels trigger egg
maturation and estrogen production.
Rising estrogen levels induce luteiniz-
ing hormone levels to rise. As the levels
of female sex hormones rise, they exert
negative feedback on FSH secretion,
limiting the number of eggs that ma-
ture in a month, but positive feedback
on LH, eventually producing the LH
surge that triggers ovulation. After
ovulation, high serum levels of sex hor-
mones again exert negative feedback on
GnRH, FSH, and LH which in turn
reduces ovarian activity. Levels of fe-
male sex hormones therefore decrease,
allowing the cycle to start over again.
Many other hormones are not
regulated by the pituitary gland,
but are released by specific tissues in
response to physiological changes. The
brain contains receptors for many of
these hormones but, unlike pituitary
hormones, it does not directly regulate
their secretion. Instead, when these
hormones bind to receptors on neu-
rons, they modify the output of neural
circuits, producing behavioral changes
that have homeostatic effects. One
example of this is a pair of hormones
called leptin and ghrelin.
Leptin and ghrelin change eating
behavior by regulating food intake
and energy balance. Both hormones
affect hunger, and both are released
in response to changes in an animal’s
internal energy stores. However, they
have different effects on the circuits
they regulate. Ghrelin keeps the
body fed. Released by the wall of the
gastrointestinal tract when the stom-
ach is empty, ghrelin activates hunger
circuits in the hypothalamus that drive
a search for food. Once the stomach is
full, ghrelin production stops, reduc-
ing the desire to eat. In contrast, leptin
helps maintain body weight within a
set range. Leptin is produced by fat
cells and is released when fat stores are
large. When it binds to neurons in the
hypothalamus, leptin suppresses the
activity of hunger circuits and reduces
the desire to eat. As fat stores are used
up, leptin levels decline, driving be-
havior that makes an animal eat more
often and replenish its fat stores.
STRESS
Your body reacts in stereotyped
ways when you feel threatened. You
breathe faster, your heartbeat speeds
up, your muscles tense and prepare
for action. These reactions may have
helped our ancestors run from preda-
tors, but any stressful situation — ar-
guing with your parents, a blind date, a
looming deadline at work, abdominal
cramps, discovering your apartment
was robbed, trying karaoke for the first
time — has the potential to set them
off. Scientists call this reaction the stress
response, and your body turns it on to
some degree in response to any external
or internal threat to homeostasis.
The Stress Response
The stress response weaves togeth-
er three of the brain’s parallel com-
munication systems, coordinating the
activity of voluntary and involuntary
nervous systems, muscles, and metabo-
lism to achieve one defensive goal.
Messages sent to muscles through
the somatic (voluntary) nervous system
prime the body to fight or run from
danger (the fight-or-flight response).
Messages sent through the autonomic
(involuntary) nervous system redirect
nutrients and oxygen to those mus-
cles. The sympathetic branch tells the
adrenal medulla to release the hor-
mone epinephrine (also called adren-
aline), which makes the heart pump
faster and relaxes the arterial walls that
supply muscles with blood so they can
respond more quickly. At the same
time, the sympathetic branch also restricts blood flow
to other organs including the skin,
gonads, digestive tract, and kidneys.
Finally, a cascade of neuroendocrine
hormones originating in the hypothal-
amus and anterior pituitary circulates
in the bloodstream, affecting processes
like metabolic rate and sexual func-
tion, and telling the adrenal cortex to
release glucocorticoid hormones —
like cortisol — into the blood.
Glucocorticoid hormones bind to
many body tissues and produce wide-
spread effects that prepare the body to
respond to potential threat. These hor-
mones stimulate the production and
release of sugar from storage sites such
as the liver, making energy available to
muscles. They also bind to brain areas
that ramp up attention and learning.
And they help inhibit nonessential
functions like growth and immune
responses until the crisis ends.
It’s easy to imagine how (and why)
these physiological changes make your
body alert and ready for action. But
when it comes to stress, your body can’t
tell the difference between the danger
of facing down a bull elephant and the
frustration of being stuck in traffic.
When stress is chronic, whatever its
cause, your adrenal glands keep pump-
ing out epinephrine and glucocorti-
coids. Many animal and human studies
have shown that long-term exposure to
these hormones can be detrimental.
Chronic Stress
Overexposure to
glucocorticoids can
damage a wide range of physiological
systems. It can cause muscles to
atrophy, push the body to store energy
as fat, and keep blood sugar abnormal-
ly high — all of these can worsen the
symptoms of diabetes. Overexposure
to glucocorticoids also contributes to
the development of hypertension (high
blood pressure) and atherosclerosis
(hardening of the arteries), increasing
the risk of heart attacks. Because the
hormones inhibit immune system
function, they also reduce resistance to
infection and inflammation, some-
times pushing the immune system to
attack the body’s own tissues.
Chronic stress can also have specif-
ic negative effects on brain tissue and
function. Persistently high levels of
glucocorticoids inhibit neuron growth
inside the hippocampus, impairing the
normal processes of memory forma-
tion and recall. Stress hormones can
also suppress neural pathways that are
normally active in decision-making
and cognition, and speed the deteri-
oration in brain function caused by
aging. They may worsen the damage
caused by a stroke. And they can
lead to sleep disorders — cortisol is
also an important wakeful signal in
the brain, so the high cortisol levels
due to chronic stress may delay sleep.
Stress-induced insomnia can then start
a vicious cycle, as the stress of sleep
deprivation leads to the release of even
more glucocorticoids.
The effects of chronic stress may
even extend beyond a single indi-
vidual, because glucocorticoids play
important roles in brain development.
If a pregnant woman suffers from
chronic stress, the elevated stress hor-
mones can cross the placenta and shift
the developmental trajectory of her
fetus. Glucocorticoids act on receptors that function as transcription factors, which can bind to DNA and
modify which genes will be expressed
as proteins. Studies with animal mod-
els have shown that mothers with high
blood levels of glucocorticoids during
pregnancy often have babies with low-
er birth weights, developmental delays,
and more sensitive stress responses
throughout their lives.
Because metabolic stressors such as
starvation induce high glucocorticoid
levels, it’s been suggested that these
hormones might help prepare the fetus
for the environment it will be born
into. Tough, stressful environments
push fetuses to develop stress-sensitive
“thrifty” metabolisms that store fat eas-
ily. Unfortunately, these stress-sensitive
metabolisms increase a person’s risk of
developing chronic metabolic diseases
like obesity or diabetes, especially if they
subsequently grow up in lower-stress
environments with plentiful food.
The effects of stress can even be
passed to subsequent generations by
epigenetic mechanisms. Chronic stress
can change the markers on DNA
molecules that indicate which of the
genes in a cell are expressed and which
are silenced. Some animal studies
indicate that when changes in markers
occur in cells that develop into eggs or
sperm, these changes can be passed on
and expressed in the animal’s offspring.
Further research might reveal wheth-
er chronic stress has similar effects
in humans, and whether inheriting
silenced or activated genes contributes
to family histories of cancer, obesity,
cardiovascular, psychiatric, or neurode-
velopmental disease.
CHAPTER 11
Childhood Disorders
AUTISM SPECTRUM
DISORDERS
Autism is often considered a
childhood condition, although
many of its symptoms persist lifelong.
Some people with autism also have
mood and anxiety disorders, seizures,
intellectual disability, attention deficit
hyperactivity disorder (ADHD), and
obsessive-compulsive disorder (OCD).
However, more than 40 percent of
people with autism have normal or
above-average intelligence. With
symptoms that range from mildly to
severely disabling, autism is considered
a spectrum. Autism spectrum disorders
(ASD) are diagnosed based on two
main criteria: impaired social commu-
nication and interaction, and repetitive
behaviors or narrow, obsessive inter-
ests. For example, some people on the
autism spectrum are unable to speak,
while others are socially awkward but
highly articulate. Many adults with an
autism diagnosis think of their autism
as a strength — enabling or motivating
them to develop deep expertise in an
area or a different perspective on the
world — rather than a disorder that
needs to be cured.
Currently, 1 of every 68 American
8-year-olds is estimated to meet the
diagnostic criteria for an autism spec-
trum disorder. The prevalence of ASD
has risen dramatically since the 1970s,
but it is unclear how much changes to diagnostic criteria and wider recognition of ASD have contributed to the
increase in diagnoses.
Four to five times more boys
than girls are diagnosed with autism,
although it is not clear whether some
of that pattern is because of underdi-
agnosis of girls. Environmental factors
such as parents having children later in
life, fever and infection during preg-
nancy, and premature birth have been
linked to an increased risk of autism
in children. A huge number of studies
have found no connection between
childhood vaccination and the increase
in autism diagnoses.
Autism is believed to be at least
partially driven by genetics, but how
do scientists know that for sure? One
low-tech approach uses twin studies:
If one of a pair of identical twins
receives an autism diagnosis, the other
twin has greater than a 50 percent
chance of also being diagnosed with
ASD. Children who have an older
sibling on the spectrum also have a
higher likelihood of being diagnosed
with autism — nearly one in five also
receives a diagnosis of ASD.
The genetics of autism is very
complicated in most cases, involving
dozens (or more) of genes, leading
to a unique condition in nearly
every person. Recently, however,
high-throughput genomic analyses
have broadened the pool of potential
genes, revealed their roles in the body,
and suggested possible new therapies.
It appears that many genes, each
with a small effect, contribute to the
inheritance of most ASDs. But such
small effects make these genes hard to
identify in genome-wide association
studies. Scientists are now looking at
the rare variants associated with ASD.
These variants occur in fewer people with ASD, but their effects are larger and easier to
detect. Some of these rare mutations
are in single genes whose impairment
is already known to cause intellectu-
al disability and social dysfunction.
These genes include FMR1 (codes for
fragile X mental retardation protein, whose non-mutant form is needed for normal cognitive development);
PTEN (codes for a tumor suppressor
enzyme that regulates cell division,
so cells don’t divide or grow too fast);
and TSC1 or TSC2 (tuberous sclerosis
complex 1 and 2), which also code for
proteins that help control cell growth
and size. Between 50 and 60 percent of
people with fragile X syndrome and
approximately 40 percent of people
with tuberous sclerosis complex have
ASD. Children with a variant of the
gene NF-1 develop tumors in child-
hood (neurofibromatosis), and a 2011 study found that nearly 10 percent of them met the criteria for autism.
Intriguingly, these ASD-related
genes influence a major signaling
pathway for regulating cell metabo-
lism, growth, and proliferation, the
mTOR pathway. This suggests a very
real potential for treating autism with
drugs that target the mTOR pathway.
For example, mouse models with
mutations in PTEN show traits simi-
lar to humans with these gene vari-
ants: altered sociability, anxiety, and
repetitive behaviors. These behaviors
can be relieved or reversed by drugs
that inhibit the mTOR pathway.
Clinical trials of these drugs (rapamy-
cin and lovastatin) are underway.
Despite this progress, autism
genetics is so complicated that it can’t
be used to diagnose the condition.
And unlike diabetes, kidney disease, or thyroid disease, autism has no biochemical or other biomarkers.
Currently, autism diagnosis is based on
behavioral analysis, but efforts are un-
derway to use more objective criteria
such as tracking eye movements and
functional neuroimaging, which can
even be done in infants.
How early can autism be detect-
ed? Parents often notice develop-
mental issues before their child’s first
birthday, and autism can be reliably
diagnosed based on behavioral
characteristics at age 2. Despite these
possibilities for early detection, most
American children aren’t diagnosed
until they’re about 4½ years old. With
evidence mounting that interventions
are more effective the earlier they be-
gin, researchers are hoping that more
objective measures will enable earlier
diagnoses and interventions.
Although the molecular caus-
es and characteristics of autism are
unclear, it appears that the condition
results from unusual cellular develop-
ment within the cerebral cortex — a
brain region that is crucial to mem-
ory, attention, perception, language,
and other functions. Both white
and gray matter of the brain show
consistent, but subtle, alterations in
people with ASD. Long-term studies
also have found that a minority of
children on the autism spectrum have
abnormally large brain volumes and
faster brain growth. Other toddlers
with autism have shown unusual
development and network inefficien-
cies at the back of the cerebral cortex.
There is evidence that some atypical
activity occurs in the cortex of people
with ASD from older childhood into
adulthood, and information might
not be integrated in the usual way
across distributed brain networks.
At this point, no medications have
been proven to reverse autism. Some
people get symptomatic relief from
drugs designed for other uses, such as
anxiety conditions, and several stud-
ies have reported social benefits from
treatment with oxytocin — a hormone
known to improve social bonding —
but the findings have been mixed. Behavioral therapies are still the only proven treatments for this challenging disorder, and early interventions are the most effective.
ATTENTION DEFICIT
HYPERACTIVITY DISORDER
Attention deficit hyperactivity
disorder (ADHD) is one of the most
commonly diagnosed childhood
conditions. In 2014, approximately
11 percent of American parents with
a child between the ages of 4 and 17
reported that their son or daughter
had received an ADHD diagnosis. In
at least 30 percent of those diagnosed
with ADHD, the disorder continues
into adulthood.
ADHD is usually characterized by
inattentiveness, as well as hyperactivity
or impulsive behaviors. Although all
young children can be hyperactive,
impulsive, and inattentive from time
to time, these symptoms are more
extreme and last longer in children
with ADHD. They often struggle to
form strong friendships, and their
grades in school can reflect their
behavior instead of their academic
ability. Executive functions, such as
finishing what they start, remembering
to bring homework back to school,
and following multistep directions, can
be especially challenging for those with
ADHD. Young people with ADHD
also have lower rates of high school
graduation and a higher risk of suicide.
No objective diagnostic test exists
for ADHD, so diagnosis requires a
comprehensive evaluation, including
a clinical interview and parent and
teacher ratings. Because problems
with attention and hyperactivity can
be caused by other conditions such as
depression, sleep issues, and learning
disorders, careful evaluation is always
needed to determine whether ADHD
is truly the cause of the symptoms. To
warrant an ADHD diagnosis, atten-
tion and behavioral problems must be
severe enough that they interfere with
normal functioning. In addition, the
behavioral issues must be present in
more than one context — not only at
home or at school, but in both settings.
Although ADHD tends to run in
families, no well-defined set of genes
is known to be responsible for the
condition. Environmental risk factors,
such as extreme early adversity, expo-
sure to lead, and low birthweight, can
also be involved. People with ADHD
do not demonstrate any obvious brain
alterations, but research has found that
people with ADHD might have dif-
ferences in the structure of brain cells
and in the brain’s ability to remodel
itself. Some people with ADHD show
unusual activity in brain cells that re-
lease dopamine, a chemical messenger
involved in rewarding behavior.
ADHD has no cure, but treat-
ments include drugs, behavioral
interventions, or both. Interestingly,
ADHD medications include stimu-
lants such as methylphenidate, as well
as newer, non-stimulant drugs. The
drugs are available in long-acting for-
mulations so children do not have to
interrupt the school day to take their
medication. Determining the right
drug and the right dose might require
a period of experimentation and sup-
port from a specialist, since dosage is
adjusted to how fast a child metaboliz-
es the drug, and to minimize the side
effects. Nevertheless, most children
with ADHD are diagnosed and treated
by their pediatricians. Effective behav-
ioral treatments include organizational
support, exercise, and meditation.
DOWN SYNDROME
Down syndrome is named for the
English physician who first described
it in 1866, but nearly 100 years passed
before scientists determined what
caused the condition: possessing an
extra copy of all or part of the 21st
chromosome. People with this syn-
drome have three copies of this genetic
material, instead of two. In some cases,
the extra copy, or trisomy, does not
occur in every cell, producing what’s
known as mosaicism. Currently, about
250,000 people in the United States
are living with Down syndrome.
There is no clear cause of the
genetic glitch, although maternal age
is a major risk factor for Down syn-
drome. Mothers older than 40 are 8.5
times more likely to have a child with
Down syndrome than mothers aged
20 to 24. Advanced paternal age has
also been linked to higher incidence
of Down syndrome.
Since late 2011, fetuses can be
screened for Down syndrome using the
mother’s blood. In the past, the risk of
test procedures meant that only older
mothers (whose likelihood of having a
Down syndrome child was known to
be higher) should be screened. Younger
mothers didn’t know until delivery
whether their child would have Down
syndrome. The new blood test, unlike
amniocentesis and chorionic villus
sampling, poses no risk to the baby, so
it can also be used for younger moth-
ers whose chance of having a child
with Down syndrome is quite small.
Children born with Down syn-
drome have distinctive facial features,
including a flattened face and bridge
of the nose, eyes that slant upward,
and small ears. They usually have small
hands and feet, short stature, and poor
muscle tone as well. The intellectual
abilities of people with Down syndrome
are typically low to moderate, although
some graduate from high school and
college, and many successfully hold
jobs. Other symptoms of Down
syndrome can include hearing loss and
heart defects, and virtually everyone born with Down syndrome will develop early-onset Alzheimer’s disease, often in their
40s or 50s. Chromosome 21 contains
the gene that encodes amyloid precur-
sor protein (APP), an Alzheimer’s dis-
ease risk factor, and possessing an extra
copy of this gene might cause the early
onset of this fatal disease. Interestingly,
people with mosaic Down syndrome
seem to have milder symptoms and are
more likely to live past 50.
There is no real treatment for
Down syndrome, nor any clear expla-
nation of what occurs in the brain.
Poor connections among nerve cells in
the hippocampus, the part of the brain
involved in memory (and the first brain
area affected by Alzheimer’s disease),
are believed to be a key factor in brain
or intellectual differences in Down syn-
drome. Dysfunction in the mitochon-
dria, the cell’s power plants, might also
play a role in development of related
disorders that involve energy metabo-
lism, such as diabetes and Alzheimer’s.
Scientists have grown stem cells
from fetuses with Down syndrome and
used them to test potential treatments
and confirm which molecular path-
ways are involved in the condition. In
one such laboratory study, researchers
took a gene that normally inactivates
the second X chromosome in female
mammals and spliced it into a stem cell
that had three copies of chromosome
21. In these cells, the inactivation gene
muted the expression of genes on the
extra chromosome 21, believed to con-
tribute to Down syndrome. Although
this is a long way from any clinical ap-
plications, the model is being used to
test the changes and cellular problems
that occur with the tripling of the 21st
chromosome, in hopes of eventually
finding a treatment.
DYSLEXIA
Dyslexia is the most common
and best-studied of the
learning disabilities, affecting as many
as 15 to 20 percent of all Americans.
People with dyslexia have a pro-
nounced difficulty with reading
despite having normal intelligence,
education, and motivation.
Symptoms include trouble with
pronunciation, lack of fluency, diffi-
culty retrieving words, poor spelling,
and hesitancy in speaking. People with
dyslexia might need more time to
respond orally to a question and might
read much more slowly than their peers.
Dyslexia is usually diagnosed in elemen-
tary school, when a child is slow to read
or struggling with reading. Although
reading skills and fluency can improve,
dyslexia persists lifelong.
Deciphering printed letters and
words and recalling their sounds and
meaning involves many areas of the
brain. Brain imaging studies indicate
these areas can be less well connected
in people with dyslexia. One of these
areas is a region on the left side of the
brain called the “word-form area,”
which is involved in the recognition of
printed letters and words. People with
dyslexia also show less brain activity in
the left occipitotemporal cortex, which
is considered essential for skilled read-
ing. Researchers believe that the brain
differences are present before the read-
ing and language difficulties become
apparent — although it is possible
that people with dyslexia read less and,
therefore, their brains develop less in
regions associated with reading. Those
with dyslexia appear to compensate for
reduced activity on the left side of the
brain by relying more heavily on the
right side.
Genetic analyses have revealed a
handful of susceptibility genes, with
animal models suggesting that these
genes affect the migration of brain
cells during development, leading to
differences in brain circuitry. Dyslexia
runs in families, with roughly half of
dyslexics sharing the condition with
a close relative. When one twin is
diagnosed with dyslexia, the second
twin is found to have the condition
55–70 percent of the time. But the
genetics of dyslexia is complex, and
likely involves a wide range of genes
and environmental factors.
Treatment for dyslexia involves
behavioral and educational interven-
tion, especially exercises like breaking
words down into sounds and linking
the sounds to specific letter patterns.
Some researchers use a child’s ability
to rapidly and automatically name
things as an early indicator of dyslexia.
This rapid automatic naming, and the
ability to recognize and work with the
sounds of language, are often impaired
in people with dyslexia. Both skills can be assessed in preschoolers and kindergartners to predict their later reading
skills. Research suggests that treat-
ments targeting phonology, as well as
multiple levels of language skills, show
the greatest promise.
EPILEPSY
If someone has two or more
seizures that cannot be explained by a
temporary underlying medical condi-
tion such as a high fever or low blood
sugar, their medical diagnosis will be
“epilepsy” — from the Greek words
meaning to “seize,” “attack,” or “take
hold of.” About 1 percent of Ameri-
can children and 1.8 percent of adults
have been diagnosed with this brain
disorder. Seizures result from irregular activity in brain cells that can last
five or more minutes at a time. Some
seizures look like staring spells, while
others cause people to collapse, shake,
and become unaware of what is going
on around them. The pattern of symp-
toms and after-seizure brain recordings
using EEGs are used to distinguish
between different types of epilepsy and
determine whether the true cause of
the seizures is epilepsy or a different
medical condition.
Seizures are classified by where
they occur in the brain. General-
ized seizures affect both sides of the
brain. They include absence or petit
mal seizures, which can cause rapid
blinking or a few seconds of staring
into space, and tonic-clonic or grand
mal seizures, which can make some-
one fall, have muscle spasms, cry out,
and/or lose consciousness. Focal or
partial seizures are localized to one
area of the brain. A simple focal sei-
zure can cause twitching or a change
in sensation, triggering strange smells
or tastes. Complex focal seizures can
leave a person confused and unable
to answer questions or follow direc-
tions. A person can also have so-called
secondary generalized seizures, which
begin in one part of the brain but
spread to become generalized seizures.
In some patients with severe epilepsy,
multiple types of seizure can occur at
the same time.
Epilepsy has many possible causes
and thus is considered a spectrum
rather than a single disorder. Causes
include premature birth, brain trauma,
and abnormal development due to
genetic factors. Attributes of epilepsy
patients such as head size, movement
disorders, and family history suggest
that genetics is involved.
Seizures can also accompany or
cause intellectual or psychiatric prob-
lems. For example, some seizures may
suppress the growth of dendrites, leav-
ing the person emotionally unsettled
or less able to learn.
Treatments for epilepsy are direct-
ed toward controlling seizures with
medication or diet. For most patients,
a single medication is enough to
control seizures, although a significant
minority cannot get adequate control
from drugs. About half of epilepsy pa-
tients, particularly those with general-
ized epilepsy, can reduce their seizures
by eating a ketogenic diet, which relies
heavily on high-fat, low-carbohydrate
foods, although it’s unclear why this
diet is effective. For severe cases that
are not relieved by medication, doctors
might recommend surgery to remove
or inactivate the seizure-initiating part
of the brain. In the most severe cases,
if one side of the brain triggers sei-
zures on the other side, surgeons may
perform “split-brain surgery,” cutting
the corpus callosum, a thick band of
white matter that connects the two
sides of the brain. Once their seizures
are controlled, people with epilepsy
can resume their normal lives.
CHAPTER 8
Adult & Aging Brain
memory, which requires a high degree
of synaptic plasticity. The loss of thin
dendritic spines could impair neuro-
nal communication and contribute
to cognitive decline. So far, direct
evidence of their role in cognitive
decline is lacking, and more studies
are needed.
Finally, the formation of new
neurons also declines with age.
Although neurogenesis was once
believed to halt after birth, we now
know of two brain regions that con-
tinue to add new neurons through-
out life: the olfactory bulbs and the
dentate gyrus of the hippocampus.
Studies suggest that the rate of neu-
rogenesis plummets with age in mice,
but recent human studies suggest a
more modest decline. It is not yet
clear whether neurogenesis apprecia-
bly affects cognition in the aging hu-
man brain, but mouse studies indicate
that strategies that boost neurogenesis
can enhance cognitive function.
Chemical Changes
The amount of neurotransmit-
ters and the number of their recep-
tors might also decline with age.
Several studies have reported that
less dopamine is synthesized in the
aged brain, and there are fewer re-
ceptors to bind the neurotransmitter.
Less robust evidence indicates that
the amount of serotonin might also
decline with age.
WHY DOES THE BRAIN AGE?
From cortical thinning to the
loss of dendritic spines, you’ve
seen how the brain ages. But what
causes these changes? Many different
theories have been advanced to
explain why neurons, and cells in
general, age. One possibility is that
changes in gene expression play a role.
Researchers have found that genes
important for synaptic plasticity are
expressed less in the brains of older
people than in the brains of younger
adults. The underexpressed genes also
showed more signs of damage.
Oxidative Stress
and DNA Damage
DNA damage that accumulates
over a lifetime could contribute
to aging processes throughout the
brain and body, and DNA damage
due to oxidative stress has received a
great deal of attention. Every cell in
your body contains organelles called
mitochondria, which function a bit
like cellular power plants, carrying
out chemical reactions that provide
energy for cell use. Some of these
metabolic reactions produce harmful
byproducts called free radicals, highly
reactive molecules which, if left un-
checked, can destroy fats and proteins
vital to normal cell function and can
damage DNA as well.
Your body has natural defense
mechanisms to neutralize free radi-
cals. Unfortunately, these mechanisms
decline with age, leaving aging tissues
more vulnerable to oxidative damage
by the free radicals. Studies of brain
cells have shown that damage to their
mitochondrial DNA accumulates with
age. In addition, the brains of people
with mild cognitive impairment and
Alzheimer’s disease show more signs
of oxidative damage than the brains of
healthy people. Studies in rodents also
link increased oxidative damage to
memory impairments.
Your brain is one of the most
metabolically active organs, demand-
ing around 20 percent of the body’s
fuel. Its enormous energy require-
ments might make the brain even
more vulnerable than other tissues
to the metabolic changes that occur
in aging. While the brain’s energy
demands remain high, its energy
supply can no longer keep pace; the
brain’s ability to take up and use glu-
cose diminishes and mitochondrial
metabolism declines.
Immune Dysfunction
Immune dysfunction often occurs
in conjunction with the metabolic
changes seen in aging. Microglia, the
brain’s resident immune cells, per-
form many important jobs: defending
against pathogens, cleaning up cellular
debris, and helping maintain and re-
model synapses. The inflammatory responses microglia mount against pathogens and damage are protective, but a prolonged inflammatory state is harmful
to brain health. Microglia become
more reactive with age, increasing the
inflammatory response in the brain
while also damping production of
helpful anti-inflammatory molecules.
Mouse studies suggest that excessive
microglial activity also contributes to
cognitive impairments.
Impaired Protein Recycling
We know that excessive buildup
of abnormal proteins in the brain
contributes to age-related neurode-
generative diseases like Alzheimer’s
and Parkinson’s. Buildup of proteins
and other cell components can also
contribute to cellular degeneration
in the healthy brain. Cells normally
break down and recycle damaged
proteins and molecules, using a pro-
cess that is usually efficient but not
perfect. Over time, damaged mole-
cules can build up in cells and prevent
them from functioning normally.
Because neurons in the brain are not
replaced as often as cells in other
parts of the body (for example, bone
marrow, intestinal lining, hair folli-
cles), brain cells might be even more
vulnerable to this buildup of damaged
molecules. Also, the cellular ma-
chinery involved in breakdown and
recycling processes degrades with age,
reducing the efficiency of the “waste
removal” systems.
Finally, remember that changes
in the aging brain occur within the
context of other changes throughout
the body. Researchers speculate that
worsening cardiovascular health, for
example, could contribute to, or even
drive, many changes seen in the
aging brain.
HEALTHY AGING
We have learned how
the brain changes with
age and why these changes can occur.
Now let’s turn our attention to a
growing field in neuroscience that
explores ways to slow these changes
and preserve healthy brain function.
Diet and Exercise
Strong evidence now suggests
that habits and choices that keep your
body healthy also benefit your mind.
Poor cardiovascular health puts a
person at increased risk of age-related
cognitive impairment. Diets rich in
vegetables, fruits, and whole grains,
and low in meat and dairy products,
can reduce cardiovascular risk factors
linked to cognitive impairment, such
as high blood pressure and high levels
of LDL cholesterol. Indeed, observa-
tional studies have found that people
who follow plant-rich diets such as
the Mediterranean diet or Dietary
Approaches to Stop Hypertension
(DASH) are less likely to develop
cognitive decline and dementia.
Specific nutrients have been linked
to improved cognitive performance
and lower rates of dementia. Anti-
oxidants, such as vitamins C and E,
flavonoids, and omega-3 fatty acids
have received considerable attention,
with observational studies showing
that high dietary intake of these
compounds is beneficial. However,
the results of lifestyle intervention
studies using supplements have been
more mixed. Finally, caloric restriction
— substantially reducing the number
of calories eaten without leading to
malnutrition — has been linked to
improved cognitive health as well as a longer lifespan.

[Figure: Synapses begin to weaken as a person ages, which can contribute to normal cognitive decline.]
Growing evidence shows that
aerobic exercise can improve cognitive
function and offset some of the de-
clines seen in aging. Numerous studies
have found that people who engage
in regular physical activity show
improved learning, improved mem-
ory, and a reduced risk of developing
dementia. Physical activity might even
slow the progression of Alzheimer’s
disease and dementia, and higher levels
of physical activity have been linked
to improvements in some markers of
structural brain health, such as reduced
cortical thinning and less shrinkage in
the hippocampus.
Exercise exerts its neuroprotec-
tive effects in the brain by improving
neuroplasticity — the brain’s ability to
form and reorganize connections be-
tween neurons in response to changes
in behavior and environment. Scien-
tists also believe that exercise increases
neurogenesis (the formation of new
nerve cells) which, in turn, enhances
neuroplasticity. Evidence from rodent
studies confirms that exercise increases
neurogenesis: Older mice allowed to
run on a wheel have higher rates of
neurogenesis in the hippocampus than
sedentary mice, and they perform bet-
ter on learning and memory tests. Ex-
ercise can also improve blood flow and
increase production of neurotrophic
factors that support new neurons and
synapses. For humans, starting exercise
later in life can be beneficial, but the
studies suggest that adopting an exer-
cise program earlier in life could yield
even more neuroprotective benefits.
Mental Stimulation
and Social Networks
Mental stimulation and large so-
cial networks can also improve cogni-
tive function in aging. In lab studies,
mice housed in cognitively stimulat-
ing environments with many oppor-
tunities for social interaction perform
better on learning and memory tests
as they age compared to mice housed
in standard cages. Much like physical
exercise, cognitive stimulation appears
to enhance neuroplasticity by increas-
ing neurogenesis and boosting levels
of important neurotrophic factors.
People who perform cognitive-
ly-demanding work or engage in
stimulating activities such as reading,
solving puzzles, or playing a musical
instrument have lower rates of cog-
nitive decline with aging. An active
social life has also been shown to be
beneficial for cognition as we age.
Neuroscientists have learned a
lot about the aging brain — how it
changes, why it changes, and how
to maintain healthy cognitive func-
tioning as we age. Even so, many
questions remain. Answers to those
questions could identify new strate-
gies for protecting the brain, not only
in our later years, but throughout
our lives.
[Figure: Exercise has been shown to increase neurogenesis in the adult brain and can slow the cognitive decline associated with aging. Credit: iStock.com/artyme83.]
CHAPTER 9
Brain States

Have you ever considered the
ups and downs that occur
during your day? Speaking
literally, you are up and awake during
the day and lying down sleeping at
night. Speaking figuratively, ups and
downs could mean that you experi-
ence periods of elevated alertness and
arousal compared with your mood when
you are tired or relaxed. Asleep, awake,
aroused, and relaxed are different brain
states, meaning that the brain’s activity
is different during each of these peri-
ods. Scientists have looked deep inside
the brain to understand what sleep is
and how rest differs from being alert.
This research is especially important
for people like doctors, pilots, and shift
workers who sometimes must focus and
make important decisions with very little
sleep. Research on brain states can also
help people who have disorders of sleep,
attention, and learning.
SLEEP
How many hours of sleep do you
get every night? Most people spend
one-third of their lives asleep. While
that might appear to be a lot of time
spent doing nothing, our brains are
active while we rest each night. The
activity in our brains during sleep is
important for brain health and for
solidifying memories.
Most people feel tired and un-
able to focus if they don’t get enough
sleep. In some cases, too little sleep
can impair a person’s driving as much
as drinking alcohol. The long-term ef-
fects of lacking sleep also involve many
health risks. Several studies in humans
have revealed that sleep-deprived
people are at increased risk for a wide
range of health issues including diabe-
tes, stress, obesity, high blood pres-
sure, anxiety, cognitive impairment,
and depression.
Brain Activity During Sleep
Scientists can measure the brain’s
electrical activity using electroenceph-
alography (EEG). Electrodes attached
to the scalp detect and record the net
electrical activity of hundreds of thou-
sands of cortical nerve cells. When a
neuron is active, ions move in and out
of the cell, altering the electrical charge
across the cell membrane. An EEG de-
tects the net electrical charge produced
when neurons increase and decrease
their activity as a group, in synchrony.
The results are “brain waves” — the
cyclic rising and falling of brain activ-
ity that can be important indicators of
brain function. In sleep studies, scien-
tists now recognize two main states:
slow wave sleep (SWS) and rapid eye
movement sleep (REM).
SWS gets its name from the
high amplitude, low frequency,
brain waves in EEG recordings.
The high amplitude of slow waves
indicates that many cortical neu-
rons are switching their activity in a
synchronized way from a depolarized
(more excitable) state to a hyperpo-
larized (less excitable) state and back
again. These slow waves appear to
be important to sleep function —
the longer a person stays awake, the
more slow waves they will experience
during the SWS state. Slow waves
become less frequent the longer the
person is asleep. If awakened during
SWS, most people recall only frag-
mented thoughts, not active dreams.
Have you ever seen a cat dream-
ing — twitching its whiskers or paws
while it sleeps? Dreaming happens
mainly during REM sleep, which takes
its name from the periodic rapid eye
movements people make in this state.
Brain activity recorded during REM
looks very similar to EEGs recorded
while awake. EEG waves during REM
sleep have much lower amplitudes
than the SWS slow waves, because
neuron activity is less synchronized
— some nerve cells depolarize while
others hyperpolarize, and the “sum” of
their electrical states is less positive (or
negative) than if they acted in synchro-
ny. Paradoxically, the fast, waking-like
EEG activity during REM sleep is ac-
companied by atonia, a loss of muscle
tone causing the body to become tem-
porarily paralyzed. The only muscles
remaining active are those that enable
breathing and control eye movements.
Oddly enough, the neurons of our
motor cortex fire as rapidly during
REM sleep as they do during waking
movement — a fact that explains why
movements like a kitten’s twitching
paws can coincide with dreams.
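The amplitude difference between slow waves and waking-like REM activity comes down to synchrony: many cells rising and falling together add up to a big summed signal, while the same cells firing out of step largely cancel out. The short Python sketch below is a minimal illustration of that summation, not anything from the book; the neuron count and the 1 Hz and 40 Hz frequencies are arbitrary assumptions chosen only to mimic slow-wave-like and REM-like activity.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2.0, 2000)   # two seconds of simulated "recording"
n_neurons = 500                 # illustrative assumption, not a measured number

def summed_signal(freq_hz, synchronized):
    """Sum simple oscillations from many model neurons, with aligned or random phases."""
    if synchronized:
        phases = np.zeros(n_neurons)                    # all cells rise and fall together
    else:
        phases = rng.uniform(0, 2 * np.pi, n_neurons)   # each cell on its own schedule
    waves = np.sin(2 * np.pi * freq_hz * t[None, :] + phases[:, None])
    return waves.sum(axis=0)    # the "EEG" here is just the sum of all the cells' signals

slow_wave_like = summed_signal(freq_hz=1.0, synchronized=True)    # SWS-like: slow and synchronized
rem_like = summed_signal(freq_hz=40.0, synchronized=False)        # REM/wake-like: fast and desynchronized

print(f"Peak of synchronized 1 Hz sum:    {np.abs(slow_wave_like).max():7.1f}")
print(f"Peak of desynchronized 40 Hz sum: {np.abs(rem_like).max():7.1f}")
```

Running it, the synchronized sum peaks at roughly the number of cells, while the desynchronized sum stays an order of magnitude smaller, which is the same reason SWS produces high-amplitude slow waves while REM sleep, despite intense neural activity, looks low-amplitude on an EEG.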
During the night, periods of
SWS and REM sleep alternate in
90-minute cycles with 75–80 minutes
of SWS followed by 10–15 minutes
of REM sleep. This cycle repeats,
typically with longer periods of REM sleep towards morning.
To study sleep disorders, researchers
often use mice that have sleep struc-
tures qualitatively very similar to hu-
mans; however, rodents have shorter
and more frequent sleep episodes lasting 3–30 minutes (sometimes longer). Rodents also sleep more during the day and are more active at night. Compare that to human adults, who are typically more active during the day and have one sleep episode at night lasting about 8 hours.

[Figure: This chart shows the brain waves of an individual recorded by an EEG machine during a night’s sleep. As the person falls asleep, the brain waves slow down and become larger. Throughout the night, the individual cycles through sleep stages, including REM sleep, where brain activity is similar to wakefulness.]
Sleep Regulation
How does the brain keep us
awake? Wakefulness is main-
tained by the brain’s arousal systems,
each regulating different aspects of the
awake state. Many arousal systems are
in the upper brainstem, where neurons
connecting with the forebrain use the
neurotransmitters acetylcholine,
norepinephrine, serotonin, and
glutamate to keep us awake. Orexin-
producing neurons, located in the
hypothalamus, send projections to the
brainstem and spinal cord, the thala-
mus and basal ganglia, as well as to the
forebrain, the amygdala, and dopa-
mine-producing neurons. In studies of
rats and monkeys, orexin appears to
exert excitatory effects on other arousal
systems. Orexins (there are two types,
both small neuropeptides) increase
metabolic rate, and their production
can be activated by insulin-induced
low blood sugar. Thus, they are
involved in energy metabolism. Given
these functions, it comes as no surprise
that orexin-producing neurons are
important for preventing a sudden
transition to sleep; their loss causes
narcolepsy, as described below. Orexin
neurons also connect to hypothalamic
neurons containing the neurotransmit-
ter histamine, which plays a role in
staying awake.
The balance of neurotransmitters
in the brain is critically important for
maintaining certain brain states. For
example, the balance of acetylcholine
and norepinephrine can affect wheth-
er we are awake (high acetylcholine
and norepinephrine) or in SWS (low
acetylcholine and norepinephrine).
During REM, norepinephrine re-
mains low while acetylcholine is high,
activating the thalamus and neocortex
enough for dreaming to occur; in
this brain state, forebrain excitation
without external sensory stimuli pro-
duces dreams. The forebrain becomes
excited by signals from the REM
sleep generator (special brainstem
neurons), leading to rapid eye move-
ments and suppression of muscle
tone — hallmark signs of REM.
During SWS, the brain systems
that keep us awake are actively sup-
pressed. This active suppression of
arousal systems is caused by the ven-
trolateral preoptic (VLPO) nucleus, a
group of nerve cells in the hypothala-
mus. Cells in the VLPO release the in-
hibitory neurotransmitters galanin and
gamma-aminobutyric acid (GABA),
which can suppress the arousal sys-
tems. Damage to the VLPO nucleus
causes irreversible insomnia.
Sleep-Wake Cycle
Two main factors drive your body
to crave sleep: the time of day or night
(circadian system) and how long you
have been awake (homeostatic system).
The homeostatic and circadian systems
are separate and act independently.
The circadian timing system is
regulated by the suprachiasmatic
nucleus, a small group of nerve cells
in the hypothalamus that functions
as a master clock. These cells express
“clock proteins,” which go through a
biochemical cycle of about 24 hours,
setting the pace for daily cycles of
activity, sleep, hormone release, and
other bodily functions. The master
clock neurons also receive input
directly from the retina of the eye.
Thus, light can reset the master clock,
adjusting it to the outside world’s
day/night cycle — this explains how
your sleep cycles can shift when you
change time zones during travel. In
addition, the suprachiasmatic nucleus
sends signals through different brain
regions, eventually contacting the
VLPO and the orexin neurons in the
lateral hypothalamus, which directly
regulate arousal.
What happens in the brain when
we don’t get enough sleep? The second
system that regulates sleepiness is the
homeostatic system, which makes you
feel sleepy if you stay awake longer
than usual. One important sleep
factor is a chemical in the brain called
adenosine. When you stay awake for
a long time, adenosine levels in the
brain increase. The increased ade-
nosine binds to specific receptors on
nerve cells in arousal centers to slow
cellular activity and reduce arousal.
Adenosine can increase the number of
slow waves during SWS. As you get
more sleep, adenosine levels fall and
slow waves decrease in number. Caf-
feine acts as a stimulant by binding to
adenosine receptors throughout the
brain and preventing their interaction
with adenosine. As a result, in the
presence of caffeine, fewer receptors
are available for the slowing influence
of adenosine.
People often say they need to
“catch up on sleep.” But can you really
make up for lost sleep? Normally, the
homeostatic and circadian systems
act in a complementary fashion to
produce a normal 24-hour cycle of
sleep and wakefulness. Nonetheless,
activating the brain’s arousal system
can keep us awake even after a long pe-
riod of wakefulness — for example, a
late-night study session to prepare for
an important exam. In normal circum-
stances, the homeostatic system will
respond to the loss of sleep by increas-
ing the duration of ensuing sleep and
increasing the number of slow waves
during the SWS episodes. As noted
above, this rebound slow wave activity
correlates with the previous time spent
awake and is mediated by adenosine.
Sleep Disorders
The most common sleep disorder,
and the one most people are familiar
with, is insomnia. Some people with
insomnia have difficulty falling asleep
initially; others fall asleep, then awak-
en part way through the night and
can’t fall back asleep. Several common
disorders, listed below, disrupt sleep
and prevent people from getting an
adequate amount of sleep.
Daytime sleepiness (not narcolep-
sy), characterized by excessive feelings
of tiredness during the day, has many
causes including sleep apnea (see be-
low). Increased daytime sleepiness can
increase the risk of daytime accidents,
especially car accidents.
Sleep apnea occurs when the air-
way muscles of the throat relax during
sleep, to the point of collapse, closing
the airway. People with sleep apnea
have difficulty breathing and wake up
without entering the deeper stages of
SWS. This condition can cause high
blood pressure and may increase the
risk of heart attack. Treatments for
sleep apnea focus on reducing airway
collapse during sleep; simple changes
that may help include losing weight,
avoiding alcohol or sedating drugs
prior to sleep, and avoiding sleeping
on one’s back. However, most people
with sleep apnea require breathing
machines to keep their airway open.
One such device, called a continuous
positive airway pressure or “CPAP”
machine, uses a small mask that fits
over the nose to provide an airstream
under pressure during sleep. In some
cases, people need surgery to correct
their airway anatomy.
REM sleep behavior disorder
occurs when nerve pathways in the
brain that prevent muscle movement
during REM sleep do not work.
Remember that dreaming happens
during REM sleep, so imagine people
literally acting out their dreams by
getting up and moving around. This
can be very disruptive to a normal
night’s sleep. The cause of REM be-
havior disorder is unknown, but it is
more common in people with neurological conditions such as Parkinson’s disease, stroke, and some types of dementia.
The disorder can be treated with
drugs for Parkinson’s or with a ben-
zodiazepine drug, clonazepam, which
enhances the effects of the inhibitory
neurotransmitter GABA.
[Figure: Electroencephalography measures brain activity through sensors placed on the head. It can record how the brain reacts to all kinds of stimuli and activities, including sleep. Credit: Simon Fraser University.]
Narcolepsy: An Example
of Sleep Disorder Research
Narcolepsy is a relatively
uncommon sleep disorder —
only 1 case per 2,000 people in the
United States — in which the brain
lacks the special neurons that help
control the transition into sleep, so
that the regular cycling is disrupted.
People with narcolepsy have sleep
attacks during the day, causing them
to suddenly fall asleep, which is
especially dangerous if they are
driving. The problem is caused by the
loss of orexin neurons in the lateral
hypothalamus. People with narcolep-
sy tend to enter REM sleep very
quickly and may even enter a dream-
ing state while still partially awake, a
condition known as hypnagogic
hallucination. Some people with
narcolepsy also have attacks in which
they lose muscle tone — similar to
what happens in REM sleep, but
while they’re awake. These attacks of
paralysis, known as cataplexy, can be
triggered by emotional experiences
and even by hearing a funny joke.
Recent research into the mech-
anisms of narcolepsy has provided
important insights into the processes
that control the mysterious transitions
between waking, slow wave sleep,
and REM sleep states. Orexin (in the
lateral hypothalamus) is critical for
preventing abnormal transitions into
REM sleep during the day. In one
study, scientists inactivated the gene
for orexin in mice and measured their
sleep patterns. They found that mice
lacking the orexin gene showed symp-
toms of narcolepsy. Similarly, humans
with narcolepsy have abnormally low
levels of orexin levels in their brain
and spinal fluid.
Because orexin levels are disrupt-
ed in narcolepsy, scientists also began
studying neurons that were neighbors
to orexin neurons to see what hap-
pened if the neighboring neurons were
activated in narcoleptic mice. Those
neurons contained melanin-concen-
trating hormone, and stimulating
them (using a technique called opto-
genetics) induced sleep — opposite to
the effect of stimulating orexin neu-
rons. A balance between the activation
of orexin neurons and their neighbor-
ing neurons could control the tran-
sition between waking and sleeping.
These findings will be important in
developing treatments for narcolepsy.
AROUSAL
Think about what happens in
your body and mind when you speak
in front of a crowd — your brain
state is very different from when
you are asleep. Perhaps you notice
changes in your breathing, heart rate,
or stomach. Maybe your thoughts are
racing or panicked. Or maybe you
are energized and excited to perform
for your audience. These are exam-
ples of the complex brain state
called arousal.
Rather than merely being awake,
arousal involves changes in the body
and brain that provide the motivation to act — teaching a class,
speaking in public, or focusing your
attention. People experience arousal
daily when searching for food while
hungry, or when talking with other
people (social interaction). Arousal is
also important for reproduction and
for avoiding danger.
The level of arousal varies across
a spectrum from low to high. When
arousal falls below a certain threshold, we can transition from wakefulness to sleep, for example. But under heightened arousal, like intense anxiety, arousal cannot drop below this threshold and we stay awake.
Neurotransmitters
During arousal, the brain must de-
vote resources to specific brain regions,
much as an emergency call center
redirects resources like ambulances
and fire trucks during a fire. Specific
types of neurons in the brain regions
involved in arousal release multiple
neurotransmitters, telling the rest of
the brain and the body to be on alert.
These neurotransmitters are dopamine
(for movement), norepinephrine (for
alertness), serotonin (for emotion),
and acetylcholine and histamine,
which help the brain communicate
with the body to increase arousal.
Sensory Input
While neurotransmitters provide
the internal signals for arousal, external
signals from the outside world — like
the bright lights (visual input) and
cheering crowds (auditory input) at a
stage performance — can also stimu-
late arousal. Sensory input gets sorted
in the brain region called the thala-
mus. Often called a “sensory clearing
house,” the thalamus regulates arous-
al, receiving and processing sensory
inputs from brain regions important
in senses like vision and hearing and
relaying these inputs to the cortex.
Autonomic Nervous System
Once the brain is aroused, what
does the body do? The reticular
activating system, in the brainstem, co-
ordinates signals coming from sensory
inputs and neurotransmitters to make
sense of events in the brain and pass
that information to the rest of the
body. The reticular activating system
specifically controls the autonomic
nervous system, which affects heart
rate, blood flow, and breathing. By
controlling these automatic body pro-
cesses, the reticular activating system
sets up the physical state of arousal,
bringing important resources like oxy-
gen and nutrients to parts of the body
where they are needed.
Together, the changes that happen
in the brain and body during arous-
al enable us to be alert and focused,
which helps us process information
quickly. Using this information, we
can choose the appropriate emotional
response or physical action for a
given situation.
Sexual Arousal
Several complex brain systems and
endocrine (hormone) systems contrib-
ute to sexual arousal and behaviors, but
the brain regions, neurotransmitters,
and body systems are similar to those
involved in general arousal. The dis-
tinguishing factor is that sexual arousal
also involves hormones such as estrogen
and testosterone, which then activate
neurons that release the same neu-
rotransmitters that are released during
general arousal. Many human and ani-
mal studies report interactions between
sex hormones and neurotransmitters
dopamine, serotonin, GABA, and
glutamate. Researchers have also found
that brain regions such as the hypo-
thalamus, amygdala, and hippocampus
contain many estrogen and progester-
one receptors, and brain regions that
mediate feelings of reward (nucleus
accumbens) and emotions like pleasure
(amygdala) motivate sexual behaviors.
Overall, the involvement of sex hormones is key to defining the brain state of sexual arousal.
ATTENTION
If you are paying
attention right now,
there should be detectable changes in
your heart rate, breathing, and blood
flow. If that sounds familiar, it’s
because those same physiological
changes occur during arousal, which
is necessary for being alert and paying
attention. As mentioned previously,
the state of arousal calls for reactions
to the environment. To make deci-
sions about what to do, you need to
focus on what’s happening in the
environment, especially involving
anything relevant to your goals. For
example, if your goal is to run away
from an angry bear, you need to be
alert and pay attention to where
you’re running so you don’t trip and
fall. Scientists have theorized that the
state of arousal speeds processing and
improves comprehension of environ-
mental details. Otherwise, your brain
would need an infinite amount of
time and energy to process all of its
sensory inputs (sounds, sights, smells,
and other feelings), because the
environment is always changing.
Focus
Even with multitasking, it is
impossible for the brain to process
all its sensory inputs. Instead, people
focus their attention on one thing at
a time. Attention is a fascinating ability because it gives you so much control, letting you fine-tune your focus to different locations, times, and topics. Consider the page
you are reading right now. Although
you can see the whole page, you focus
on only one line at a time. Alterna-
tively, you can turn your attention
to the past — just minutes ago when
you were reading about arousal. Or
you can ignore the sentences alto-
gether and focus on the number of
times the word “you” occurs on this
page. Scientists recognize two types
of attention, which involve different
brain processes: voluntary (endog-
enous) attention and involuntary
(exogenous) attention.
Voluntary attention happens
when you choose what to focus on —
like finding a loved one in a crowd.
The frontal and parietal cortices of
the brain are active when you control
your attention or direct it towards a
specific object or location. Involun-
tary attention occurs when something
in the environment (like a sudden
noise or movement) grabs your atten-
tion. Involuntary attention is a dis-
traction from your chosen goals and,
in fact, researchers often use distrac-
tor objects in attention experiments.
Distractors can be emotional, like
pictures of family, or non-emotional
images that stand out from other
stimuli, like a red circle surrounded
by gray squares. Brain regions in the
right hemisphere, collectively known
as the ventral frontoparietal network,
form a system that processes new and
interesting stimuli that distract you
from the task at hand. Research on at-
tention can help us understand visual
tasks, learning, child development,
and disorders of attention.
Disorders of Attention
Paying attention for long
periods of time, such as a
3-hour lecture, can be difficult for
many people. For some people, even
focusing for a short time can be hard.
Several disorders that affect the ability
to pay attention are attention deficit
hyperactivity disorder (ADHD),
schizophrenia, prosopagnosia, and
hemineglect syndrome. It may seem
strange to regard schizophrenia as an
attention disturbance, but some
psychiatric studies suggest that it
involves a failure of selective attention.
Prosopagnosia, or face blindness, is a
cognitive disorder in which a person is
unable to recognize faces — even their
own family members. The severity of
this condition varies, and genetic
factors might be involved. Attention
disorders have various causes, but we
will focus on hemineglect syndrome,
caused by damage to the right parietal
cortex, a brain region important in
involuntary attention.
Between 50 and 82 percent of pa-
tients who suffer stroke in the right
hemisphere experience hemineglect
syndrome, also known as spatial ne-
glect and unilateral neglect. In these
cases, patients with neglect ignore the
left side of their visual field. Some-
times they ignore the left side of the
body and the left side of individual
objects, as well. Diagnosis of hemine-
glect syndrome can be done with a
pen and paper. For example, patients
can be instructed to draw a copy of
a picture like a butterfly or a castle,
and those patients with hemineglect
usually draw only the right half of
the picture or leave out details of the
left side. Research on patients with
hemineglect syndrome contributes to
our understanding of rehabilitation
after stroke, as well as the role of the
right parietal cortex in attention
and perception.
REST: DEFAULT MODE
NETWORK
What is the difference
between being alert
and resting while awake? During times
of rest and relaxation, you’re usually
avoiding heavy thinking or complicat-
ed tasks, and parts of the brain called
the default mode network are more
active. You may think of the default
mode network as a personal lullaby or
a playlist that turns on when you are
ready to relax. Activity of the default
mode network decreases (the lullaby
gets quieter) when you start doing or
thinking about a demanding task.
Human studies using imaging tech-
niques such as functional magnetic
resonance imaging (fMRI) and
positron emission tomography (PET)
have identified which brain regions
belong to the default mode network.
These brain areas, which are involved
in emotion, personality, introspection,
and memory, include frontal brain
regions (ventromedial prefrontal
cortex, dorsomedial prefrontal cortex,
and anterior cingulate cortex), as well
as the posterior cingulate cortex, lateral
parietal cortex, and precuneus.
Although the exact role of the
default mode network is unclear,
the functions of its “participating”
brain regions provide hints about its
purpose. Studies on emotion have
revealed that activity in the ventro-
medial PFC is directly related to
how anxious a subject feels while
performing a task — suggesting that
the default mode network may play a
role in regulating emotion and mood.
Activity in the dorsomedial PFC (a
region involved in self-referential
or introspective thoughts) increases
when a person is at rest and day-
dreaming. The dorsomedial PFC is
also involved in stream-of-conscious-
ness thoughts and thoughts about
oneself in the past, present, or future
(autobiographical self ). The roles of
these regions suggest that the default
mode network may also function
in self-reflection and our sense of
self in time.
The posterior brain regions of the
default mode network (posterior cin-
gulate cortex, lateral parietal cortex,
and precuneus) become more active
when remembering concrete mem-
ories from past experiences. These
brain regions are connected with the
hippocampus, which is important for
learning and forming memories. Both
the hippocampus and the default
mode network are more active when
a person is at rest in the evening and
less active when waking up early in
the day. These patterns indicate that
the default mode network helps to
process and remember the events
of the day.
Future studies using electrical re-
cordings from inside the human brain
can be paired with fMRI to tell us
more about the brain activity patterns
of the default mode network and how
brain regions coordinate their activity
during tasks that utilize the functions
of this network.
CHAPTER 10
The Body in Balance

The cells of your body are
immersed in a constantly
changing environment. The
nutrients that sustain them rise and fall
with each meal. Gases, ions, and other
solutes flow back and forth between
your cells and blood. Chemicals bind
to cells and trigger the building and re-
lease of proteins. Your cells digest food,
get rid of wastes, build new tissues,
and destroy old cells. Environmental
changes, both internal and external,
ripple through your body’s physio-
logical systems. One of your brain’s
less-visible jobs is to cope with all these
changes, keep them within a normal
range, and maintain the healthy func-
tions of your body.
The tendency of your body’s tissues
and organ systems to maintain a condi-
tion of balance or equilibrium is called
homeostasis. Homeostasis depends
on active regulation, with dynamic
adjustments that keep the environ-
ment of your cells and tissues relatively
constant. The brain is part of many
homeostatic systems, providing signals
that coordinate your body’s internal
clocks and regulating hormone secre-
tion by the endocrine system. These
functions often involve a region of the
forebrain called the hypothalamus.
CIRCADIAN RHYTHMS
Almost every cell in your body
has an internal clock that tells
it when to become active, when to rest,
and when to divide. These clocks
broker changes in many of the body’s
physiological systems over a 24-hour,
or circadian, period. For example, the
clocks cause faster pulses of peristaltic
waves in your gut during the day and
make your blood pressure dip at night.
But because these clocks are deep
inside your body and cannot detect
daylight, none of them can tell time
on its own. Instead, daily rhythms are
coordinated by the suprachiasmatic
nucleus (SCN), a tiny group of
neurons in the hypothalamus.
Neurons in the SCN act like a met-
ronome for the rest of the body, emit-
ting a steady stream of action potentials
during the day and becoming quiet
at night. The shift between active and
silent states is controlled by cyclic in-
teractions between two sets of proteins
encoded by your body’s “clock” genes.
Researchers first identified clock genes
in the fruit fly Drosophila melanogaster
and studied how they keep time; since
then, a nearly identical set of genes has
been found in mammals. The SCN also
tracks what time it is based on signals
it receives from photoreceptors in the
retina, which keeps its activity in sync
with the Earth’s actual day/night cycle.
That little nudge is very important be-
cause, on their own, clock proteins take
slightly more than 24 hours to complete
a full cycle. Studies of animals deprived
of light have discovered that they go to
sleep and wake up a bit later each day.
An autonomic neural pathway
ties the daily rhythmic activity of the
SCN directly to other clocks in the
body. Neurons in the SCN stimulate an
adjacent region of the brain called the
paraventricular nucleus (PVN), which
in turn sends signals down a chain of
neurons through the spinal cord to the
peripheral organs of the body. You’ve al-
ready learned how signals in part of this
neural pathway stimulate orexin neu-
rons to regulate the body’s sleep/wake
cycle. Related pathways also govern the
secretion of melatonin, a hormone that
influences sleep behaviors. Specifically,
electrical activity originating in the SCN
enters the PVN’s neural network and
sends signals up to the pineal gland, a
small pinecone-shaped gland embedded
between the cerebral hemispheres. The
pineal gland secretes melatonin into the
bloodstream at night. Melatonin binds
to cells in many tissues, and although
it has no direct effect on clock gene
expression in the SCN, its systemic
effects seem to reduce alertness and
increase sleepiness. Light exposure trig-
gers signals that stop melatonin secre-
tion, promoting wakeful behaviors.
Together, these signals keep all the
body’s clocks synchronized to the same
24-hour cycle. Coordinated body clocks
enable your body’s physiological systems
to work together at the right times.
When your body prepares to wake from
sleep, 1) levels of the stress hormone
cortisol peak in the blood, releasing
sugars from storage and increasing
appetite, and 2) core body temperature
begins to drift upwards, raising your
body’s metabolic rate. These events,
synchronized with others, prepare your
body for a new day’s activity.
Desynchronizing the body’s phys-
iological clocks can cause noticeable
and sometimes serious health effects.
You might have experienced a familiar
example of circadian rhythm distur-
bance: jet lag. After crossing many time
zones in a short time period, a person’s
patterns of wakefulness and hunger
are out of sync with day and night.
Exposure to the local day/night cycle
resets the brain and body, but it can
take several days to get fully resynchro-
nized. Circadian rhythms can also be
disturbed by situations like late-shift
jobs or blindness, which decouple nor-
mal daylight signals from wake/sleep
cycles. Long-term circadian disruptions
are associated with health problems
including weight gain, increased rates
of insomnia, depression, and cancers.
HORMONES,
HOMEOSTASIS,
AND BEHAVIOR
Neurons can quickly
deliver the brain’s
messages to precise targets in the body.
Hormones, on the other hand, deliver
messages more slowly but can affect a
larger set of tissues, producing large-
scale changes in metabolism, growth,
and behavior. The brain is one of the
tissues that “listens” for hormonal
signals — neurons throughout the
brain are studded with hormone
receptors — and the brain’s responses
play an important part in regulating
hormone secretion and changing
behaviors to keep the body systems in
equilibrium. The brain regions
involved in hormone release are called
the neuroendocrine system.
The hypothalamus oversees the
production and release of many hor-
mones through its close ties to the pi-
tuitary gland. The paraventricular and
supraoptic nuclei of the hypothalamus
send axons into the posterior part of
the pituitary gland; activation of spe-
cific neurons releases either vasopressin
or oxytocin into capillaries within the
pituitary. Both of these molecules act
as neurotransmitters inside the brain,
but they are also hormones that affect
distant tissues of the body. Vasopressin
(also called antidiuretic hormone) in-
creases water retention in the kidneys
and constricts blood vessels (vasocon-
striction). Oxytocin promotes uterine
contractions during labor and milk
release during nursing.
Other hypothalamic regions send
axons to a capillary-rich area above the
pituitary called the median eminence.
When these neurons are activated,
they release their hormones into the
blood. These releasing (and inhibiting)
hormones travel through local blood
vessels to the anterior pituitary, where
they trigger (or inhibit) secretion of
a second specific hormone. Of the
seven anterior pituitary hormones,
five are trophic hormones — these
travel in the bloodstream to stimulate
activity in specific endocrine glands
(thyroid, adrenal cortex, ovaries, etc.)
throughout the body. The remaining
two hormones act on non-endocrine
tissues. Growth hormone stimulates
the growth of bone and soft tissues,
and prolactin stimulates milk produc-
tion by the breasts. Hormones released
from the anterior pituitary influence
growth, cellular metabolism, emotion,
and the physiology of reproduction,
hunger, thirst, and stress.
Many hormones produced by
the pituitary and its target endocrine
glands affect receptors inside the brain
— thus, these hormones can alter
neuronal function and gene transcrip-
tion in the hypothalamus. The effect
is to reduce the amount of hormone
released by the hypothalamus when
those circuits become active. These
negative feedback loops enable precise
doses of hormones to be delivered to
body tissues, and ensure that the hor-
mone levels are narrowly regulated.
One of these three-hormone
cascades regulates reproduction in
mammals. Its underlying pattern is the
same in both sexes: 1) gonadotropin-
releasing hormone (GnRH) from the
hypothalamus makes the anterior pi-
tuitary release 2) luteinizing hormone
(LH) and follicle stimulating hormone
(FSH), which in turn make the gonads
secrete 3) sex hormones and start the
development of mature eggs or sperm.
[Figure: The neuroendocrine system maintains homeostasis, the body's normal equilibrium, and controls the response to stress. The adrenal gland releases the stress hormones norepinephrine, epinephrine, and cortisol, which quicken heart rate and prepare muscles for action. Corticotrophin releasing hormone (CRH) is released from the hypothalamus and travels to the pituitary gland, where it triggers the release of adrenocorticotropic hormone (ACTH). ACTH travels in the blood to the adrenal glands, where it stimulates the release of cortisol.]
Sex hormones, in turn, attach to
receptors in the hypothalamus and an-
terior pituitary and modify the release
of the hypothalamic and pituitary
hormones. However, sex hormones
regulate these feedback loops differ-
ently in males and females.
Male sex hormones induce simple
negative feedback loops that reduce
the secretion of gonadotropin-releas-
ing hormone, luteinizing hormone,
and follicle stimulating hormone. The
interplay among these hormones creates
a repetitive pulse of GnRH that peaks
every 90 minutes. The waxing and wan-
ing of GnRH keeps testosterone levels
relatively steady within body tissues,
maintains male libido, and keeps the
testes producing new sperm each day.
Female feedback patterns are
more complex. Over the course of the
month-long menstrual cycle, female sex
hormones exert both positive and nega-
tive feedback on GnRH, FSH, and LH.
When circulating levels of the
female sex hormones estrogen and
progesterone are low, rising follicle
stimulating hormone levels trigger egg
maturation and estrogen production.
Rising estrogen levels induce luteiniz-
ing hormone levels to rise. As the levels
of female sex hormones rise, they exert
negative feedback on FSH secretion,
limiting the number of eggs that ma-
ture in a month, but positive feedback
on LH, eventually producing the LH
surge that triggers ovulation. After
ovulation, high serum levels of sex hor-
mones again exert negative feedback on
GnRH, FSH, and LH which in turn
reduces ovarian activity. Levels of fe-
male sex hormones therefore decrease,
allowing the cycle to start over again.
Many other hormones are not
regulated by the pituitary gland,
but are released by specific tissues in
response to physiological changes. The
brain contains receptors for many of
these hormones but, unlike pituitary
hormones, it does not directly regulate
their secretion. Instead, when these
hormones bind to receptors on neu-
rons, they modify the output of neural
circuits, producing behavioral changes
that have homeostatic effects. One
example of this is a pair of hormones
called leptin and ghrelin.
Leptin and ghrelin change eating
behavior by regulating food intake
and energy balance. Both hormones
affect hunger, and both are released
in response to changes in an animal’s
internal energy stores. However, they
have different effects on the circuits
they regulate. Ghrelin keeps the
body fed. Released by the wall of the
gastrointestinal tract when the stom-
ach is empty, ghrelin activates hunger
circuits in the hypothalamus that drive
a search for food. Once the stomach is
full, ghrelin production stops, reduc-
ing the desire to eat. In contrast, leptin
helps maintain body weight within a
set range. Leptin is produced by fat
cells and is released when fat stores are
large. When it binds to neurons in the
hypothalamus, leptin suppresses the
activity of hunger circuits and reduces
the desire to eat. As fat stores are used
up, leptin levels decline, driving be-
havior that makes an animal eat more
often and replenish its fat stores.
STRESS
Your body reacts in stereotyped
ways when you feel threatened. You
breathe faster, your heartbeat speeds
up, your muscles tense and prepare
for action. These reactions may have
helped our ancestors run from preda-
tors, but any stressful situation — ar-
guing with your parents, a blind date, a
looming deadline at work, abdominal
cramps, discovering your apartment
was robbed, trying karaoke for the first
time — has the potential to set them
off. Scientists call this reaction the stress
response, and your body turns it on to
some degree in response to any external
or internal threat to homeostasis.
The Stress Response
The stress response weaves togeth-
er three of the brain’s parallel com-
munication systems, coordinating the
activity of voluntary and involuntary
nervous systems, muscles, and metabo-
lism to achieve one defensive goal.
Messages sent to muscles through
the somatic (voluntary) nervous system
prime the body to fight or run from
danger (the fight-or-flight response).
Messages sent through the autonomic
(involuntary) nervous system redirect
nutrients and oxygen to those mus-
cles. The sympathetic branch tells the
adrenal medulla to release the hor-
mone epinephrine (also called adren-
aline), which makes the heart pump
faster and relaxes the arterial walls that
supply muscles with blood so they can
respond more quickly. At the same
time, the autonomic system’s parasym-
pathetic branch restricts blood flow
to other organs including the skin,
gonads, digestive tract, and kidneys.
Finally, a cascade of neuroendocrine
hormones originating in the hypothal-
amus and anterior pituitary circulates
in the bloodstream, affecting processes
like metabolic rate and sexual func-
tion, and telling the adrenal cortex to
release glucocorticoid hormones —
like cortisol — into the blood.
Glucocorticoid hormones bind to
many body tissues and produce wide-
spread effects that prepare the body to
respond to potential threat. These hor-
mones stimulate the production and
release of sugar from storage sites such
as the liver, making energy available to
muscles. They also bind to brain areas
that ramp up attention and learning.
And they help inhibit nonessential
functions like growth and immune
responses until the crisis ends.
It’s easy to imagine how (and why)
these physiological changes make your
body alert and ready for action. But
when it comes to stress, your body can’t
tell the difference between the danger
of facing down a bull elephant and the
frustration of being stuck in traffic.
When stress is chronic, whatever its
cause, your adrenal glands keep pump-
ing out epinephrine and glucocorti-
coids. Many animal and human studies
have shown that long-term exposure to
these hormones can be detrimental.
Chronic Stress
Overexposure to
glucocorticoids can
damage a wide range of physiological
systems. It can cause muscles to
atrophy, push the body to store energy
as fat, and keep blood sugar abnormal-
ly high — all of these can worsen the
symptoms of diabetes. Overexposure
to glucocorticoids also contributes to
the development of hypertension (high
blood pressure) and atherosclerosis
(hardening of the arteries), increasing
the risk of heart attacks. Because the
hormones inhibit immune system
function, they also reduce resistance to
infection and inflammation, some-
times pushing the immune system to
attack the body’s own tissues.
Chronic stress can also have specif-
ic negative effects on brain tissue and
function. Persistently high levels of
glucocorticoids inhibit neuron growth
inside the hippocampus, impairing the
normal processes of memory forma-
tion and recall. Stress hormones can
also suppress neural pathways that are
normally active in decision-making
and cognition, and speed the deteri-
oration in brain function caused by
aging. They may worsen the damage
caused by a stroke. And they can
lead to sleep disorders — cortisol is
also an important wakeful signal in
the brain, so the high cortisol levels
due to chronic stress may delay sleep.
Stress-induced insomnia can then start
a vicious cycle, as the stress of sleep
deprivation leads to the release of even
more glucocorticoids.
The effects of chronic stress may
even extend beyond a single indi-
vidual, because glucocorticoids play
important roles in brain development.
If a pregnant woman suffers from
chronic stress, the elevated stress hor-
mones can cross the placenta and shift
the developmental trajectory of her
fetus. Glucocorticoids act through receptors that function as transcription factors, binding to DNA and modifying which genes will be expressed as proteins. Studies with animal mod-
els have shown that mothers with high
blood levels of glucocorticoids during
pregnancy often have babies with low-
er birth weights, developmental delays,
and more sensitive stress responses
throughout their lives.
Because metabolic stressors such as
starvation induce high glucocorticoid
levels, it’s been suggested that these
hormones might help prepare the fetus
for the environment it will be born
into. Tough, stressful environments
push fetuses to develop stress-sensitive
“thrifty” metabolisms that store fat eas-
ily. Unfortunately, these stress-sensitive
metabolisms increase a person’s risk of
developing chronic metabolic diseases
like obesity or diabetes, especially if they
subsequently grow up in lower-stress
environments with plentiful food.
The effects of stress can even be
passed to subsequent generations by
epigenetic mechanisms. Chronic stress
can change the markers on DNA
molecules that indicate which of the
genes in a cell are expressed and which
are silenced. Some animal studies
indicate that when changes in markers
occur in cells that develop into eggs or
sperm, these changes can be passed on
and expressed in the animal’s offspring.
Further research might reveal wheth-
er chronic stress has similar effects
in humans, and whether inheriting
silenced or activated genes contributes
to family histories of cancer, obesity,
cardiovascular, psychiatric, or neurode-
velopmental disease.
CHAPTER 11
Childhood Disorders
AUTISM SPECTRUM DISORDERS
Autism is often considered a
childhood condition, although
many of its symptoms persist lifelong.
Some people with autism also have
mood and anxiety disorders, seizures,
intellectual disability, attention deficit
hyperactivity disorder (ADHD), and
obsessive-compulsive disorder (OCD).
However, more than 40 percent of
people with autism have normal or
above-average intelligence. With
symptoms that range from mildly to
severely disabling, autism is considered
a spectrum. Autism spectrum disorders
(ASD) are diagnosed based on two
main criteria: impaired social commu-
nication and interaction, and repetitive
behaviors or narrow, obsessive inter-
ests. For example, some people on the
autism spectrum are unable to speak,
while others are socially awkward but
highly articulate. Many adults with an
autism diagnosis think of their autism
as a strength — enabling or motivating
them to develop deep expertise in an
area or a different perspective on the
world — rather than a disorder that
needs to be cured.
Currently, 1 of every 68 American
8-year-olds is estimated to meet the
diagnostic criteria for an autism spec-
trum disorder. The prevalence of ASD
has risen dramatically since the 1970s,
but it is unclear whether changes to
diagnostic criteria and wider recogni-
tion of ASD have contributed to the
increase in diagnoses.
Four to five times more boys
than girls are diagnosed with autism,
although it is not clear whether some
of that pattern is because of underdi-
agnosis of girls. Environmental factors
such as parents having children later in
life, fever and infection during preg-
nancy, and premature birth have been
linked to an increased risk of autism
in children. A huge number of studies
have found no connection between
childhood vaccination and the increase
in autism diagnoses.
Autism is believed to be at least
partially driven by genetics, but how
do scientists know that for sure? One
low-tech approach uses twin studies:
If one of a pair of identical twins
receives an autism diagnosis, the other
twin has greater than a 50 percent
chance of also being diagnosed with
ASD. Children who have an older
sibling on the spectrum also have a
higher likelihood of being diagnosed
with autism — nearly one in five also
receives a diagnosis of ASD.
The genetics of autism is very
complicated in most cases, involving
dozens (or more) of genes, leading
to a unique condition in nearly
every person. Recently, however,
high-throughput genomic analyses
have broadened the pool of potential
genes, revealed their roles in the body,
and suggested possible new therapies.
It appears that many genes, each
with a small effect, contribute to the
inheritance of most ASDs. But such
small effects make these genes hard to
identify in genome-wide association
studies. Scientists are now looking at
the rare variants associated with ASD.
These afflict fewer people with ASD,
but their effects are larger and easier to
detect. Some of these rare mutations
are in single genes whose impairment
is already known to cause intellectu-
al disability and social dysfunction.
These genes include FMR1 (codes for
fragile X mental retardation protein,
but its non-mutant form is needed
for normal cognitive development);
PTEN (codes for a tumor suppressor
enzyme that regulates cell division,
so cells don’t divide or grow too fast);
and TSC1 or TSC2 (tuberous sclerosis
complex 1 and 2), which also code for
proteins that help control cell growth
and size. Between 50 and 60 percent of
people with fragile X syndrome and
approximately 40 percent of people
with tuberous sclerosis complex have
ASD. Children with a variant of the
gene NF-1 develop tumors in child-
hood (neurofibromatosis) and a 2011
study found that nearly 10 percent
met the criteria for autism.
Intriguingly, these ASD-related
genes influence a major signaling
pathway for regulating cell metabo-
lism, growth, and proliferation, the
mTOR pathway. This suggests a very
real potential for treating autism with
drugs that target the mTOR pathway.
For example, mouse models with
mutations in PTEN show traits simi-
lar to humans with these gene vari-
ants: altered sociability, anxiety, and
repetitive behaviors. These behaviors
can be relieved or reversed by drugs
that inhibit the mTOR pathway.
Clinical trials of these drugs (rapamy-
cin and lovastatin) are underway.
Despite this progress, autism
genetics is so complicated that it can’t
be used to diagnose the condition.
And unlike diabetes, kidney disease, or
thyroid disease, there are no biochem-
ical or other biomarkers of autism.
Currently, autism diagnosis is based on
behavioral analysis, but efforts are un-
derway to use more objective criteria
such as tracking eye movements and
functional neuroimaging, which can
even be done in infants.
How early can autism be detect-
ed? Parents often notice develop-
mental issues before their child’s first
birthday, and autism can be reliably
diagnosed based on behavioral
characteristics at age 2. Despite these
possibilities for early detection, most
American children aren’t diagnosed
until they’re about 4½ years old. With
evidence mounting that interventions
are more effective the earlier they be-
gin, researchers are hoping that more
objective measures will enable earlier
diagnoses and interventions.
Although the molecular caus-
es and characteristics of autism are
unclear, it appears that the condition
results from unusual cellular develop-
ment within the cerebral cortex — a
brain region that is crucial to mem-
ory, attention, perception, language,
and other functions. Both white
and gray matter of the brain show
consistent, but subtle, alterations in
people with ASD. Long-term studies
also have found that a minority of
children on the autism spectrum have
abnormally large brain volumes and
faster brain growth. Other toddlers
with autism have shown unusual
development and network inefficien-
cies at the back of the cerebral cortex.
There is evidence that some atypical
activity occurs in the cortex of people
with ASD from older childhood into
adulthood, and information might
not be integrated in the usual way
across distributed brain networks.
At this point, no medications have
been proven to reverse autism. Some
people get symptomatic relief from
drugs designed for other uses, such as
anxiety conditions, and several stud-
ies have reported social benefits from
treatment with oxytocin — a hormone
known to improve social bonding —
but the findings have been mixed. For
this challenging disorder, behavioral
therapies are still the only proven treat-
ments for autism, and early interven-
tions are the most effective.
ATTENTION DEFICIT
HYPERACTIVITY DISORDER
Attention deficit hyperactivity
disorder (ADHD) is one of the most
commonly diagnosed childhood
conditions. In 2014, approximately
11 percent of American parents with
a child between the ages of 4 and 17
reported that their son or daughter
had received an ADHD diagnosis. In
at least 30 percent of those diagnosed
with ADHD, the disorder continues
into adulthood.
ADHD is usually characterized by
inattentiveness, as well as hyperactivity
or impulsive behaviors. Although all
young children can be hyperactive,
impulsive, and inattentive from time
to time, these symptoms are more
extreme and last longer in children
with ADHD. They often struggle to
form strong friendships, and their
grades in school can reflect their
behavior instead of their academic
ability. Executive functions, such as
finishing what they start, remembering
to bring homework back to school,
and following multistep directions, can
be especially challenging for those with
ADHD. Young people with ADHD
also have lower rates of high school
graduation and a higher risk of suicide.
No objective diagnostic test exists
for ADHD, so diagnosis requires a
comprehensive evaluation, including
a clinical interview and parent and
teacher ratings. Because problems
with attention and hyperactivity can
be caused by other conditions such as
depression, sleep issues, and learning
disorders, careful evaluation is always
needed to determine whether ADHD
is truly the cause of the symptoms. To
warrant an ADHD diagnosis, atten-
tion and behavioral problems must be
severe enough that they interfere with
normal functioning. In addition, the
behavioral issues must be present in
more than one context — not only at
home or at school, but in both settings.
Although ADHD tends to run in
families, no well-defined set of genes
is known to be responsible for the
condition. Environmental risk factors,
such as extreme early adversity, expo-
sure to lead, and low birthweight, can
also be involved. People with ADHD
do not demonstrate any obvious brain
alterations, but research has found that
people with ADHD might have dif-
ferences in the structure of brain cells
and in the brain’s ability to remodel
itself. Some people with ADHD show
unusual activity in brain cells that re-
lease dopamine, a chemical messenger
involved in rewarding behavior.
ADHD has no cure, but treat-
ments include drugs, behavioral
interventions, or both. Interestingly,
ADHD medications include stimu-
lants such as methylphenidate, as well
as newer, non-stimulant drugs. The
drugs are available in long-acting for-
mulations so children do not have to
interrupt the school day to take their
medication. Determining the right
drug and the right dose might require
a period of experimentation and sup-
port from a specialist, since dosage is
adjusted to how fast a child metaboliz-
es the drug, and to minimize the side
effects. Nevertheless, most children
with ADHD are diagnosed and treated
by their pediatricians. Effective behav-
ioral treatments include organizational
support, exercise, and meditation.
DOWN SYNDROME
Down syndrome is named for the
English physician who first described
it in 1866, but nearly 100 years passed
before scientists determined what
caused the condition: possessing an
extra copy of all or part of the 21st
chromosome. People with this syn-
drome have three copies of this genetic
material, instead of two. In some cases,
the extra copy, or trisomy, does not
occur in every cell, producing what’s
known as mosaicism. Currently, about
250,000 people in the United States
are living with Down syndrome.
There is no clear cause of the
genetic glitch, although maternal age
is a major risk factor for Down syn-
drome. Mothers older than 40 are 8.5
times more likely to have a child with
Down syndrome than mothers aged
20 to 24. Advanced paternal age has
also been linked to higher incidence
of Down syndrome.
Since late 2011, fetuses can be
screened for Down syndrome using the
mother’s blood. In the past, the risk of
test procedures meant that only older
mothers (whose likelihood of having a
Down syndrome child was known to
be higher) should be screened. Younger
mothers didn’t know until delivery
whether their child would have Down
syndrome. The new blood test, unlike
amniocentesis and chorionic villus
sampling, poses no risk to the baby, so
it can also be used for younger moth-
ers whose chance of having a child
with Down syndrome is quite small.
Children born with Down syn-
drome have distinctive facial features,
including a flattened face and bridge
of the nose, eyes that slant upward,
and small ears. They usually have small
hands and feet, short stature, and poor
muscle tone as well. The intellectual
abilities of people with Down syndrome
are typically low to moderate, although
some graduate from high school and
college, and many successfully hold
jobs. Other symptoms of Down
syndrome can include hearing loss and
heart defects, and virtually everyone born with Down syndrome will develop early-onset Alzheimer's disease, often in their
40s or 50s. Chromosome 21 contains
the gene that encodes amyloid precur-
sor protein (APP), an Alzheimer’s dis-
ease risk factor, and possessing an extra
copy of this gene might cause the early
onset of this fatal disease. Interestingly,
people with mosaic Down syndrome
seem to have milder symptoms and are
more likely to live past 50.
There is no real treatment for
Down syndrome, nor any clear expla-
nation of what occurs in the brain.
Poor connections among nerve cells in
the hippocampus, the part of the brain
involved in memory (and the first brain
area affected by Alzheimer’s disease),
are believed to be a key factor in brain
or intellectual differences in Down syn-
drome. Dysfunction in the mitochon-
dria, the cell’s power plants, might also
play a role in development of related
disorders that involve energy metabo-
lism, such as diabetes and Alzheimer’s.
Scientists have grown stem cells
from fetuses with Down syndrome and
used them to test potential treatments
and confirm which molecular path-
ways are involved in the condition. In
one such laboratory study, researchers
took a gene that normally inactivates
the second X chromosome in female
mammals and spliced it into a stem cell
that had three copies of chromosome
21. In these cells, the inactivation gene
muted the expression of genes on the
extra chromosome 21, believed to con-
tribute to Down syndrome. Although
this is a long way from any clinical ap-
plications, the model is being used to
test the changes and cellular problems
that occur with the tripling of the 21st
chromosome, in hopes of eventually
finding a treatment.
DYSLEXIA
Dyslexia is the most common
and best-studied of the
learning disabilities, affecting as many
as 15 to 20 percent of all Americans.
People with dyslexia have a pro-
nounced difficulty with reading
despite having normal intelligence,
education, and motivation.
Symptoms include trouble with
pronunciation, lack of fluency, diffi-
culty retrieving words, poor spelling,
and hesitancy in speaking. People with
dyslexia might need more time to
respond orally to a question and might
read much more slowly than their peers.
Dyslexia is usually diagnosed in elemen-
tary school, when a child is slow to read
or struggling with reading. Although
reading skills and fluency can improve,
dyslexia persists lifelong.
Deciphering printed letters and
words and recalling their sounds and
meaning involves many areas of the
brain. Brain imaging studies indicate
these areas can be less well connected
in people with dyslexia. One of these
areas is a region on the left side of the
brain called the “word-form area,”
which is involved in the recognition of
printed letters and words. People with
dyslexia also show less brain activity in
the left occipitotemporal cortex, which
is considered essential for skilled read-
ing. Researchers believe that the brain
differences are present before the read-
ing and language difficulties become
apparent — although it is possible
that people with dyslexia read less and,
therefore, their brains develop less in
regions associated with reading. Those
with dyslexia appear to compensate for
reduced activity on the left side of the
brain by relying more heavily on the
right side.
Genetic analyses have revealed a
handful of susceptibility genes, with
animal models suggesting that these
genes affect the migration of brain
cells during development, leading to
differences in brain circuitry. Dyslexia
runs in families, with roughly half of
dyslexics sharing the condition with
a close relative. When one twin is
diagnosed with dyslexia, the second
twin is found to have the condition
55-70 percent of the time. But the
genetics of dyslexia is complex, and
likely involves a wide range of genes
and environmental factors.
Treatment for dyslexia involves
behavioral and educational interven-
tion, especially exercises like breaking
words down into sounds and linking
the sounds to specific letter patterns.
Some researchers use a child’s ability
to rapidly and automatically name
things as an early indicator of dyslexia.
This rapid automatic naming, and the
ability to recognize and work with the
sounds of language, are often impaired
in people with dyslexia. Both skills can
be used in preschoolers and kinder-
gartners to predict their later reading
skills. Research suggests that treat-
ments targeting phonology, as well as
multiple levels of language skills, show
the greatest promise.
EPILEPSY
If someone has two or more
seizures that cannot be explained by a
temporary underlying medical condi-
tion such as a high fever or low blood
sugar, their medical diagnosis will be
“epilepsy” — from the Greek words
meaning to “seize,” “attack,” or “take
hold of.” About 1 percent of Ameri-
can children and 1.8 percent of adults
have been diagnosed with this brain
disorder. Seizures result from irregular
activities in brain cells that can last
five or more minutes at a time. Some
seizures look like staring spells, while
others cause people to collapse, shake,
and become unaware of what is going
on around them. The pattern of symp-
toms and after-seizure brain recordings
using EEGs are used to distinguish
between different types of epilepsy and
determine whether the true cause of
the seizures is epilepsy or a different
medical condition.
Seizures are classified by where
they occur in the brain. General-
ized seizures affect both sides of the
brain. They include absence or petit
mal seizures, which can cause rapid
blinking or a few seconds of staring
into space, and tonic-clonic or grand
mal seizures, which can make some-
one fall, have muscle spasms, cry out,
and/or lose consciousness. Focal or
partial seizures are localized to one
area of the brain. A simple focal sei-
zure can cause twitching or a change
in sensation, triggering strange smells
or tastes. Complex focal seizures can
leave a person confused and unable
to answer questions or follow direc-
tions. A person can also have so-called
secondary generalized seizures, which
begin in one part of the brain but
spread to become generalized seizures.
In some patients with severe epilepsy,
multiple types of seizure can occur at
the same time.
Epilepsy has many possible causes
and thus is considered a spectrum
rather than a single disorder. Causes
include premature birth, brain trauma,
and abnormal development due to
genetic factors. Attributes of epilepsy
patients such as head size, movement
disorders, and family history suggest
that genetics is involved.
Seizures can also accompany or
cause intellectual or psychiatric prob-
lems. For example, some seizures may
suppress the growth of dendrites, leav-
ing the person emotionally unsettled
or less able to learn.
Treatments for epilepsy are direct-
ed toward controlling seizures with
medication or diet. For most patients,
a single medication is enough to
control seizures, although a significant
minority cannot get adequate control
from drugs. About half of epilepsy pa-
tients, particularly those with general-
ized epilepsy, can reduce their seizures
by eating a ketogenic diet, which relies
heavily on high-fat, low-carbohydrate
foods, although it’s unclear why this
diet is effective. For severe cases that
are not relieved by medication, doctors
might recommend surgery to remove
or inactivate the seizure-initiating part
of the brain. In the most severe cases,
if one side of the brain triggers sei-
zures on the other side, surgeons may
perform “split-brain surgery,” cutting
the corpus callosum, a thick band of
white matter that connects the two
sides of the brain. Once their seizures
are controlled, people with epilepsy
can resume their normal lives.
Epilepsy has many possible causes
and thus is considered a spectrum
rather than a single disorder. |
You can only respond to the prompt using information in the context block. Give your answer in bullet points. If you cannot answer using the context alone, say "I cannot determine the answer to that due to lack of context" | What actions did the UN Secretary General, U Thant, take in response to the trial of Sheikh Mujibur Rahman in August 1971? |
ISSN 2959-6467 (Online) :: ISSN 2959-6459 (Print)
ISSN 2959-6459 (ISSN-L)
Vol. 3, Issue I, 2024 (January – June)
International Journal of Politics & Social Sciences Review
(IJPSSR)
Website: https://ijpssr.org.pk/ OJS: https://ojs.ijpssr.org.pk/ Email: [email protected]
The United Nations' Involvement in Bangladesh's Liberation War: A Detailed Analysis
Md. Firoz Al Mamun 1, Md. Mehbub Hasan 2 & Md. Ruhul Amin, PhD 3
1 Assistant Professor, Department of Political Science, Islamic University, Kushtia, Bangladesh
2 Researcher and Student, Department of Government and Politics, Jahangirnagar University, Savar, Dhaka-1342
3 (Corresponding Author), Associate Professor, Department of Public Administration, Comilla University, Cumilla, Bangladesh
Abstract
On March 26, 1971, the Bangladeshi independence struggle against domestic imperialism and ethnic discrimination in Pakistan got underway. The United Nations gave relief and humanitarian activities first priority from the start of the Liberation War until November. The UN Security Council was called in when India and Pakistan entered the Liberation War on December 3. The Security Council meetings continued as different suggestions and counterproposals were presented. In the Security Council, there was a clash between the USSR and the US: while the USSR helped Bangladesh, China and the US helped Pakistan. Keeping their positions neutral, France and Britain did not cast votes in the Security Council. The Security Council therefore could not come to an agreement. On December 6, after discussion and an official decision, the Security Council sent the agenda to the General Assembly. On December 7, a resolution headed "Unity Formula for Peace" was overwhelmingly approved at the General Assembly. As India and Bangladesh rejected this idea, the US called a second Security Council session. Sessions of the Security Council were held at various intervals between December 12 and 21. Everything changed dramatically when Bangladesh gained its independence on December 16. The protracted Bangladesh war was essentially resolved on December 21, when the Security Council unanimously approved a ceasefire resolution.
Keywords: Liberation War, Bangladesh, United Nations, International Intervention, Conflict Resolution.
Introduction
The 1971 Bengali nation's armed struggle for independence took on an international dimension; as the conflict came to an end, India and Pakistan became directly involved, and the major powers and their powerful allies began to compete actively with one another over the establishment of an independent state of Bangladesh. This effort included international and multinational aspects in addition to bilateral and regional forms (Jahan, 2008:245). The bigger forum in this instance, where the major powers and stakeholders participated in various capacities, was the UN. The major powers usually agree on decisions made and carried out by the United Nations, a global institution, and the decision-making process is primarily a reflection of how the major powers see a given situation. The UN Security Council may reach an impasse, in which case the General Assembly may adopt certain restricted actions. All of this occurred during the Bangladesh crisis of 1971 (Matin, 1990: 23). With Bangladesh's emergence on December 16, the subcontinent's map underwent a reconfiguration. Furthermore, the United Nations' involvement in these matters was primarily restricted to humanitarian efforts and relief activities. The Pakistan military attempted to stifle the calls for freedom of the people of East Pakistan by genocide and ethnic oppression, which was thwarted by the
establishment of Bangladesh, under the standard pretexts of national integrity, internal affairs, etc. For
this reason, it is plausible to argue that Bangladesh's establishment following the dissolution of the
post-World War II state structure was a highly justifiable event. Following its declaration of
independence, Bangladesh joined a number of UN bodies in 1972 and attained full membership status
in 1974 (Hussain, 2012:189). There is a dearth of scholarship on the United Nations' involvement in
the Great War of Liberation. This study is therefore highly significant, and the authors have made a concentrated effort to examine and bring to light material on the role of the organization charged with maintaining world peace and security throughout the Great War of Liberation.
Research Methodology
This article has all of the basic aspects of social research. Data were gathered from secondary sources, and the research was carried out using both qualitative and quantitative methods, with content analysis as the principal methodology. In essence, the study draws on secondary sources to acquire and analyze data and information.
The research relies on secondary sources, either directly or indirectly. The study was done by
gathering information from worldwide media coverage, UN documents, publications, research papers,
reports, archives relating to the liberation war, and records housed in the museum during Bangladesh's
War of Liberation (1971).
The Role of the United Nations in the early stages of the Liberation War
All UN employees were evacuated from Dhaka on the night of March 25, 1971, when the Pakistani armed forces launched "Operation Searchlight" and the Liberation War began. But the UN did not move to halt the genocide and human rights atrocities in East Pakistan. On April 1st, nonetheless, the Secretary
General sent an emergency humanitarian offer to the Pakistani government for the inhabitants of East
Pakistan. Nevertheless, the Pakistani government turned down the offer of humanitarian assistance
and even forbade the Red Cross relief aircraft from landing in Dhaka (Hossein, 2012: 150). President
Yahya Khan gave the UN authorization to carry out rescue operations after the UN Secretary General
appealed to the Pakistani government on April 22 for immediate humanitarian aid. Beginning on June
7, 1971, the United Nations started assistance efforts in East Pakistan. The acronym UNROD stood
for the United Nations Relief and Works Agency for East Pakistan. United Nations recognized the
name "Bangladesh" on December 21 and dubbed the rescue agency "UNROD" (Time Magazine,
January 1, 1971). The surge of refugees entering India on April 23 was the reason the Indian
government made its first plea for outside assistance since the start of the liberation struggle.
Coordination in this respect was taken up by the United Nations High Commissioner for Refugees
(UNHCR). Besides UNHCR, UNICEF and WFP were also actively involved in the refugee camps in India. The World Bank estimated that the Indian government spent $1 billion on refugees overall up to December, of which just $215 million came from UN assistance; the relief operation included by far the biggest airlift in UN history (International Herald Tribune, July 8, 1971). The funds committed to and received by India from the
UN and other sources up to June were:
International Aid to India (June 1971)
United Nations: 9,80,00,000
Other Sources: 16,50,00,000
Total: 26,30,00,000
Source: International Herald Tribune, 8 July, 1971.
United Nations product aid to India
1. Food aid: 6,267 tons
2. Vehicles: 2,200 pieces
3. Medical supplies: 700 tons
4. Polythene for making shelters: enough for 3 million refugees
Source: Rahman, Hasan Hafizur (ed.) (1984) Bangladesh Liberation War Documents, Volume- 13,
Dhaka: Ministry of Liberation War Affairs, Government of the People's Republic of Bangladesh, page
783-87.
Though the UN participated in the relief effort, until September, no talks on matters like the
liberation struggle in Bangladesh, genocide, abuses of human rights, etc. were held in the UN. Bangladesh was even left off the agenda of the UN's annual general debate in September. Still, throughout
their statements, the leaders of several nations brought up Bangladesh.
Proposal for deployment of United Nations observers in East Pakistan
Early in the Liberation War, India asked the UN to step in and handle the refugee crisis and put an end
to the genocide in East Pakistan. Yet, at first, Pakistan opposed the UN's intervention in the refugee
crisis, viewing any UN action as meddling in its domestic affairs (Hasan, 1994: 251–53). However,
Yahya Khan consented to embrace all UN measures as of May on US advice. Pakistan started
participating actively in diplomatic efforts in a number of UN forums during this period, with support
from Muslim nations and the United States.
The aim was that UN pressure would compel India to cease aiding Bangladesh's independence movement. Recognizing this, India vehemently objected to the UN's political role, which it saw as concealed behind humanitarian endeavors. Although the UN Secretary General had
mostly been mute on ending genocide and breaches of human rights since the start of the Liberation
War, on July 19 he suggested that "UN peacekeepers or observers be deployed on the India-Pakistan
border to resolve the refugee problem." But the UN Secretary General's plan to send out troops or
monitors was shelved after the Mujibnagar administration and India turned down this offer (Hossein,
2012: 87).
The initiative of the Secretary General under Article 99 of the UN Charter
The UN Secretary-General, U Thant, submitted a memorandum under Article 99 to the president of
the Security Council and member nations on July 20, 1971, the day following the request for the
deployment of observers. There were eight paragraphs or suggestions in the Secretary General's letter.
"Obviously, it is for the members of the Security Council themselves to decide whether such
consideration should take place formally or informally, in public or private," he stated in the note
(UN Doc, A/8401).
India, the primary backer of Bangladesh's independence movement, was put in a humiliating
position by the Secretary General's suggestions. The Soviet Union supported India in this
circumstance. India's principal foreign benefactor in the wake of the Soviet-Indian alliance's signature
was the Soviet Union. The Soviet Union asked the Secretary-General on August 20th not to call a
meeting of the Security Council to discuss the East Pakistan issue. As a result, the Security Council
did not meet, even on the Secretary General's suggestion. Major nations and interested parties
maintained their diplomatic efforts in anticipation of the United Nations General Assembly's 26th
session, which was scheduled to open on September 21 (The Year Book of World Affairs, 1972).
United Nations Intervention in the Question of Bangabandhu's Trial
In the final week of July, many media sources reported that Sheikh Mujibur Rahman would be put on trial for treason. The Mujibnagar government promptly raised the alarm following the publication of this news, as Sheikh Mujib was the unquestioned leader of Bangladesh's liberation movement. Consequently, the Mujibnagar government formally requested the international community and influential nations to ensure Sheikh Mujib's safety (Joy Bangla, July 30, 1971). The trial of
Sheikh Mujibur Rahman commenced on August 9, 1971, under the authority of the Pakistani
government. On August 10, U Thant, the Secretary General of the United Nations, intervened in the
Pakistani military junta's attempt to bring Sheikh Mujibur Rahman to trial. The Secretary General
stated clearly that the topic at hand is highly sensitive and delicate, and it is the responsibility of the
legal system of Pakistan, as a member state, to handle it. It is also a subject of great curiosity and
worry in several spheres, encompassing both humanitarian and political domains. The Secretary
General has been regularly receiving expressions of grave concern from government representatives
regarding the situation in East Pakistan. It is widely believed that unless some form of agreement is
reached, the restoration of peace and normalcy in the region is unlikely. The Secretary General
concurs with several members that any advancements about the destiny of Sheikh Mujibur Rahman
would undoubtedly have repercussions beyond the borders of Pakistan (The International Herald Tribune, August 10, 1971).
Delegation of Bangladesh to the United Nations
The United Nations General Assembly meets every September. On September 21, the Mujibnagar
administration (the first government of independent Bangladesh) agreed to dispatch a 16-member team led by Justice Abu Saeed, then stationed in London. On September 25, the Bangladesh delegation convened
and nominated Fakir Shahabuddin as the party's member secretary. Bangladesh was not a member of
the United Nations before then. In this situation, the delegation had a tough time entering the UN
building. Pakistan, in particular, tried to label the delegation as 'rebellious elements'. Even in this
hostile climate, this group continued to engage in creative and intellectual activities as the Mujibnagar
government's representation on the United Nations premises. Yogendranath Banerjee, currently the President of the United Nations Association of Journalists, assisted the team in entering the United Nations building. In October, the Bangladesh delegation conducted a plenary news conference at a space at the Church Center, located on the west side of 777 United Nations Plaza. As a result, the Bangladeshi representation to the United Nations actively participated in mobilizing global opinion in Bangladesh's favor.
The 26th meeting of the General Assembly
The latter portion of the September 1971 UN headquarters conference focused on the membership of
the People's Republic of China and the status of Bangladesh. The 26th session failed to resolve the
issue of the 'Bangladesh dilemma'. Bangladesh was cited in the annual report of the Secretary General and in remarks made by national representatives. In his report, UN Secretary-General U Thant emphasized the imperative for the international community to provide comprehensive assistance to governments and peoples in the event of a large-scale disaster, asserting that the only viable resolution to the underlying issue lay in a political approach centered on reconciliation and humanitarian principles (UN Document A/8411).
This session's official and informal assembly of country representatives at the UN
headquarters focused on China's UN membership and Bangladesh. New Zealand, Madagascar,
Luxembourg, Belgium, Norway, and Sweden stressed the subcontinental situation before the UN
General Assembly and demanded a quick settlement. Pakistan was told to restore a popular
administration in East Pakistan by France and Britain. The Soviet Union no longer regarded the
situation as an internal Pakistani issue. Pakistan received only ambivalence and leniency from the US. Luxembourg's delegate asked, "When we witness millions of people suffering indescribably, being brutally punished in the guise of national security, and civilized society's weakest losing their rights, in the name of national sovereignty and security, should such cruelty continue?" On September 29, Canadian Foreign Minister Mitchell Sharp said, 'When an internal conflict is moving so many nations so directly, would it be right to consider it an internal matter?' Pakistan was advised to be flexible by Sweden, whose representative remarked that "it would behove Pakistan to respect human rights and accept the public opinion declared through voting". The US essentially backed Pakistan and said, "Pakistan's internal issues will be dealt with by the people and government of Pakistan."
The East Pakistan problem had generated a worldwide catastrophe, and Pakistan's
ruthlessness had caused millions of refugees to cross the border and seek asylum in other nations. In
session, the French foreign minister remarked, "If this injustice cannot be corrected at the root, the
flow of refugees will not stop." The Belgian delegate echoed Schumann's query, "Will the return of the refugees be possible?", and noted that "a political and constitutional solution to this crisis must be found". This remedy should come from public opinion: only when they are confident that human rights will not be abused in the future will refugees return home. British Foreign Secretary Sir Alec Douglas-Home was equally clear about the solution (Muhith, 2014). The statements of these countries are arranged in a table and
some important questions are answered by it. These are:
a. States that have identified the Bangladesh question as a political issue;
b. States that have termed it only as a humanitarian problem;
c. States that have identified the matter as Pakistan's internal affairs;
d. Only those countries that have spoken of genocide and human rights violations.
Country | Problem description | References to both political and humanitarian aspects | Paying attention to humanitarian issues | Internal Affairs of Pakistan | Genocide and human rights violations
Afghanistan * *
Albania
Algeria * *
Argentina * * *
Australia * * *
Austria * *
Bahrain
Barbados
Belgium * *
Bhutan
Bolivia
Botswana
Brazil
Bulgaria
Burma
Burundi
Belarus
Cameroon
Canada
Central African
Republic
Sri Lanka (Ceylon) * *
Chad
Chile * *
China * *
Colombia
Congo
Costa Rica
Cuba
Cyprus * *
Czechoslovakia
Secretary General's Good Offices Proposal
During the 26th General Assembly session, governments, the international media, and the public put pressure on UN Secretary General U Thant to take fresh action on the Bangladesh crisis. On October 20, he offered India and Pakistan his good offices. The Secretary General said, "In this potentially very dangerous situation, I feel that it is my duty as Secretary General to do everything I can to help the government immediately concerned avoid any disaster," and added that his offices were always open if needed (UN Doc, S/10410: 6).
This letter of the Secretary General implied that he viewed the matter as an India-Pakistan war. President Yahya Khan also wanted a Pak-India confrontation. Yahya Khan informed the Secretary General a day later that Pakistan had accepted the idea: "I appreciate your willingness to provide your good offices and hope you can visit India and Pakistan immediately to negotiate force withdrawal. I am convinced this will benefit and advance peace" (UN Doc, S/10410: 7).
India, however, did not openly reject the UN Secretary General's 'good offices'. Given the Secretary General's status and diplomatic etiquette, India could not reject the plan outright, and therefore rejected it indirectly. The Secretary General's recommendation came as Indira Gandhi was touring the world to promote Bangladesh's liberation struggle. Upon returning from abroad, she informed the Secretary General on November 16 that the military rule of Pakistan was a severe threat to national life and security.
Indira Gandhi said that Pakistan wanted to turn problems within Pakistan into problems between India and Pakistan, and that the reasons why people were crossing the border as refugees could not be ignored. She politely told the Secretary General that instead of India and Pakistan meeting, Yahya Khan and the leaders of the Awami League should do so. "It's always nice to meet you and talk about our ideas," she said, adding that India would back his efforts to find a political solution in East Bengal that met the stated needs of the people, as long as he was ready to look at the situation in a broader context (Keesings, 1972).
In his response, the Indian Prime Minister said that the UN Secretary-General was guilty. In
order to protect the Pakistani junta, the Secretary General is avoiding the main problem. In a message
to the Prime Minister of India on November 22, the Secretary-General denied the charges, saying that
good office requires everyone to work together. In this very important and complicated case, there
doesn't seem to be a reason for the Secretary General to help. 10 (UN Doc S/10410). The UN
Secretary-General's "Good Office" project in the subcontinent stopped when this message was sent.
1606th Session of the Security Council (December 4, 1971)
On December 3, India entered the Pakistan War, threatening peace and stability in one of the world's
most populated areas. Both nations reported the incident to the UN Secretary General on December 4.
After thoroughly evaluating the problem, the Secretary-General requested a Security Council session
from Council President Jakob Malik (Soviet Union) (The New York Times, 4 December 1971).
The 1606th Security Council session (5 permanents—US, Soviet Union, China, UK, France—
and 10 non-permanents—Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone,
Somalia, Syria) meets on December 4, 1971. Justice Abu Saeed Chowdhury, the Bangladesh
delegation leader, asked the Security Council President to advocate for the Mujibnagar administration
before the meeting. The Security Council President proposed listening to Justice Abu Saeed
Chowdhury's remarks as Bangladesh's envoy at the start of the meeting.
A lengthy Security Council debate on hearing Justice Abu Saeed Chowdhury's remarks from
Bangladesh. The council president presented two ideas in response to criticism.
a. Permit the letter to be circulated as a Security Council document from Justice Abu Saeed
Chowdhury, the representative of Bangladesh.
b. The council should allow Justice Abu Saeed Chowdhury to speak as a representative of the
people of Bangladesh.
The majority of nations did not object to the speech's delivery on the grounds of principle, thus
the Council President issued an order granting the request to present the resolution. However, because
to a lack of required support, the President rejected Justice Chowdhury's second motion to join the
Security Council debate (UN Doc, S/PV/1606).
The Security Council extended an invitation to the representatives of India and Pakistan to
make remarks. The first speaker was Agha Shahi, Pakistan's Permanent Representative to the UN. He
charged India with breaking Articles 2(4) and 2(7) of the UN Charter in his long statement, and he
called on the UN to take responsibility for safeguarding Pakistan's territorial integrity (UN Doc,
S/PV/1606: 49–148). In his remarks, Samar Sen, India's Permanent Representative to the UN, stated,
"The enemy is sidestepping the core problem and falsely condemning India. According to him, this
problem has resulted from the strategy of putting seven crore Bengalis under weapons control.
Despite the fact that Sheikh Mujib was predicted by Yahya Khan to become Pakistan's prime minister,
nobody is certain of his current whereabouts. Bengalis have won elections but have not been granted
authority, which is why Samar Sen supports their independence. This led them to launch nonviolent
movements as well, but these were also put down by massacres. They are therefore quite justified in
demanding their right to self-determination. According to UN Doc, S/PV/1606: 150–85, he stated that
the ceasefire should be between the Pakistan Army and Bangladesh, not between India and Pakistan.
1. The United States of America's Security Council Resolution (S/10416)
Following the keynote addresses by the Indian and Pakistani delegates, US Representative
George Bush Sr. charged India of aggressiveness. 'Immediate ceasefire between India and Pakistan,
withdrawal of the armies of both countries to their respective borders, deployment of United Nations
observers on the India-Pakistan border, taking all necessary steps for the repatriation of refugees' (UN
Doc, S/10416) was one of the seven points of his resolution. Every Security Council member
participated in the discussion of the US proposal.
2. Belgium, Italy, and Japan's Proposals (S/10417)
Belgium, Italy, and Japan submitted a five-point draft resolution to the Security Council in
response to the US proposal. In line with the UN Charter's tenets, the draft resolution calls on "the
governments of both countries to immediately cease hostilities and all forms of hostilities and to take
necessary measures for the rapid and voluntary repatriation of refugees" (UN Doc, S/10417).
3. The Soviet Union's Security Council Resolution (S/10418)
At opposition to the American plan, the Soviet Union put out a two-point draft resolution at the
UN Security Council's 1606th resolution calling for an end to hostilities in East Pakistan. "A political
solution in East Pakistan, which would end hostilities there and at the same time stop all terrorist
activities by the Pakistan Army in East Pakistan," was what the Soviet proposal demanded (UN Doc,
S/10418).
4. The Argentine, Nicaraguan, Sierra Leonean, and Somalian proposals (S/10419)
Argentina, Nicaragua, Sierra Leone, and Somalia sent the Security Council a two-point draft
resolution (S/10419) at the Soviet Union's advice. Under the draft resolution (UN Doc, S/10416), both
nations must "immediately ceasefire and withdraw" and the Secretary-General is to "keep the Security
Council regularly informed of the situation."
The Security Council heard four resolutions during its 1606th meeting. Following a thorough
discussion and debate, the president of the Security Council presented the US proposal—one of four
draft proposals—for voting among the Security Council's member nations for acceptance.
1st veto of the Soviet Union in favor of Bangladesh in the Security Council (S/10416)
In favor of the US proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria. Abstained: United Kingdom, France. Against: Soviet Union, Poland.
When the US resolution calling for a ceasefire and the withdrawal of troops was put to the vote in the Security Council, 11 members voted in favor, the Soviet Union and Poland voted against, and neither the UK nor France voted. As a permanent member of the Security Council, the Soviet Union thereby vetoed the motion, the Soviet Union's 106th veto in the UN Security Council (UN Doc,
S/PV/1606: 357-71).
1607th Emergency Session of the Security Council (December 5, 1971)
The Security Council convened its 1607th session at 2.30 p.m. on December 5. This session was
unusual in that Tunisia (from Africa) and Saudi Arabia (from Asia), neither of them Security Council
members, were permitted to speak; they attended at the Security Council President's request. I.B.
Taylor-Kamara (Sierra Leone) chaired this Security Council session (UN Doc, S/PV/1607).
The Resolution of China (S/10421)
This session featured a Chinese draft resolution. China's proposal termed India an aggressor and chastised
it for its role in establishing Bangladesh, demanding "the unconditional and immediate withdrawal of the
Indian army occupying Pakistani territory" (UN Doc, S/10421).
After China's draft proposal, the Tunisian ambassador spoke for Africa. He said, "The
Security Council should also call for a ceasefire, so that peace can be established according to the
various clauses of the Charter". The Saudi representative then spoke for Asia: envoy Jamil Baroodi
called for a meeting of Asian heads of state of the subcontinent in order to move beyond the
politics of the big powers. After the Saudi delegate, the Soviet representative introduced a draft
proposal (S/10422, December 5, 1971).
The Soviet Union said that a 'ceasefire may be a temporary solution but a permanent one would
need a political accord between India and Pakistan'. The Soviet delegate accused the US and China of
disregarding two major issues for the sake of "temporary interests". Pakistan and India spoke in the Security
Council after the Soviet representative. The Council President then informed the Security Council
that three resolutions were now before it: S/10418 (Soviet Union), S/10421 (China), and S/10423 (8 nations).
S/10417 and S/10419 were no longer before the Council, since the same states had presented the 8-nation
resolution (S/10423), which superseded them. The Council President put the Soviet proposal to the vote
first (UN Doc, S/PV/1607: 75-201).
Consequences of the Soviet Union's proposal (S/10418)
In favor of the proposal: Soviet Union, Poland. Abstained: United States, United Kingdom, France, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria. Against: China.
The Chinese veto caused the proposal to be rejected. The majority of members were not
convinced by this suggestion either; furthermore, in their statements, those who abstained also
expressed reservations about it. When the Council President moved to put the Chinese proposal
(S/10421) to a vote following the vote on the Soviet proposal, the Chinese delegate stated that China
was still in consultation with other Council members. No vote was held on the Chinese proposal, as
China showed no interest in having it voted on. The eight-nation draft proposal, headed by
Argentina, was then put to a vote by the Council President.
2nd veto of the Soviet Union in favor of Bangladesh in the Security Council (S/10423)
In favor of the 8-nation proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria. Abstained: United Kingdom, France. Against: Soviet Union, Poland.
This proposal received 11 votes in favor; the UK and France abstained, while the Soviet Union and
Poland voted against. With the Soviet Union exercising its veto once again, the eight-nation ceasefire
resolution failed (UN Doc, S/PV/1607: 230-331). After the vote on the 8-nation resolution, the French
delegate called such motions and counter-motions 'presumptive', and the Council President notified the
Council of two further resolutions, (S/10421) and (S/10425). The Security Council President exhorted
member nations to find a solution and postponed the discussion until 3.30 pm the next day.
The Proposals by 8 Nations (S/10423)
In this session of the Security Council, the eight non-permanent member states of the Council (Argentina,
Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone and Somalia), led by Argentina, put forward a
three-point proposal. The resolution called for a ceasefire and the creation of an environment for the
return of refugees (UN Doc, S/10423).
The Proposals by 6 Nations (S/10425)
At the 1607th session of the Security Council, six nations (Belgium, Italy, Japan, Nicaragua, Sierra
Leone and Tunisia) proposed another three-point resolution (UN Doc, S/10425). This proposal:
a. Urges the governments concerned to implement an immediate cease-fire.
b. Requests the Secretary General to keep the Council informed on the resolution's implementation.
c. Decides to keep under consideration methods to restore peace in the region.
1608th meeting of the Security Council (December 6, 1971)
The 1608th Security Council session took place at 3.30 pm on December 6, 1971. This session, like
the previous ones, allowed India and Pakistan, as well as Tunisia (from Africa) and Saudi Arabia (from
Asia), to take part in the debate. I.B. Taylor-Kamara (Sierra Leone) chaired this Security Council
session (UN Doc, S/PV/1608: 1-5).
Soviet Union Resolution (S/10426)
Early in this session, the Soviet delegate offered a new resolution containing two revisions to the
six-nation draft resolution (S/10425): in operative paragraph 1, replace 'the Governments concerned' with
'all parties concerned' and add 'and cessation of all hostilities'.
Peace Proposal Unity Formula (S/10429)
In the wake of the Security Council impasse, the member nations informally discussed bringing the issue
to the General Assembly. Following discussions, Argentina, Somalia, Nicaragua, Sierra Leone,
Burundi, and Japan presented a draft resolution (S/10429) to the Security Council, recommending a
special session of the UN General Assembly since the permanent members had failed to reach consensus
at the 1606th and 1607th meetings. This proposal followed the General Assembly decision of 3 November
1950 [377 A (V)], widely known as the 'Uniting for Peace' resolution. When the UN Security Council is
deadlocked, the General Assembly may act under this formula to preserve world peace and security.
Soviet Union Resolution (S/10428)
The Soviet Union introduced another draft resolution late in this session. In a five-point draft
resolution, the USSR urged that "all parties concerned should immediately cease hostilities and
implement a cease-fire." It also called for a political solution in East Pakistan, based on the outcome of
the 1970 elections, in order to end hostilities, and asked the UN Secretary-General to oversee
implementation and continue peace efforts in the area. After briefly discussing the draft resolutions in
the Security Council, the President decided to put the Unity Formula for Peace resolution (S/10429) to
the vote first, since the Soviet and Chinese resolutions (S/10428) and (S/10421) were bound to fail.
Consequences of the Unity Formula for Peace proposal in the Security Council
In favor of the proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria. Abstained: United Kingdom, France, Soviet Union, Poland. Against: none.
After this proposal was passed, the United Nations started to implement the Unity Formula
for Peace (UN Doc, S/PV/1608) with the aim of ending the war in the subcontinent. To protect
Pakistan, China took the initiative to send this proposal to the UN General Assembly.
26th (Special Session) of the General Assembly
In accordance with the Security Council's December 6 decision, the 26th (special) session of the
General Assembly was convened at the UN on December 7. Three proposals were placed before the
Assembly:
A. A proposal by 13 nations (A/L/647);
B. A 34-nation proposal led by Argentina (A/L/647/Rev.1);
C. A Soviet proposal (A/L/648).
The General Assembly considered the three draft proposals for 12 hours on December 7; 58 of the 131
member states took part in the debate.
The Resolution of 13 States to the General Assembly (A/L/647)
Thirteen member states introduced a draft resolution for General Assembly debate at the start of this
session. The 13 states' suggestions mainly included the following:
a. Urge Pakistan and India to immediately halt hostilities and withdraw their soldiers to their
respective sides of the border.
b. Boost efforts to repatriate refugees.
c. Request the Secretary-General to urge that the decisions of the Security Council and the General
Assembly be implemented.
d. In view of the existing resolution, urge the Security Council to respond appropriately
(UN Doc, A/L 647).
The Resolution of 34 States to the General Assembly (A/L/647 Rev-1)
The General Assembly received a draft resolution from 34 governments, led by Argentina and
backed by the US, Muslim nations, and China. The heart of the 34-state resolution was an
'immediately effective Indo-Pakistani ceasefire and evacuation of Indian troops from East Pakistan,
respecting the concept of the integrity of Pakistan'.
The Resolution of Soviet Union (A/L/648)
The Soviet Union's proposal states, "Ceasefire may be a temporary solution, but a permanent solution
requires a political agreement between India and Pakistan" (UN Doc, A/L 648).
Countries Participating in the Debate in the Special Session (26th) of the United Nations
General Assembly
Asia Africa Europe Middle and South
America
Others
Bhutan Algeria Albania Argentina Australia
Sri Lanka Burundi Bulgaria Brazil Fiji
China Chad Czechoslovakia Chile New Zealand
Cyprus Gabon Denmark Ecuador United States
India Ghana France Mexico
Indonesia Ivory Coast Greece Nicaragua
Iran Madagascar Italy Peru
Japan Mauritania Netherlands Uruguay
Lebanon Sierra Leone Poland
Malaysia Somalia Portugal
Mongolia Sudan Soviet Union
Nepal Tanzania Sweden
Pakistan Togo Britain
Saudi Arabia Tunisia Yugoslavia
Turkey Hungary
Jordan
Quake
17 countries 14 countries 15 countries 8 countries 4 countries
Source: Prepared by reviewing various UN documents.
Following deliberation in the General Assembly, the President of the Assembly, Adam Malik
(Foreign Minister of Indonesia), put the amended motion submitted by the 34 nations, led by Argentina,
to a vote in the General Assembly. This was done in accordance with Rule 93 of the Rules of Procedure,
which governs the process.
Voting results on the 34-state resolution in the General Assembly
In favor of the 34-state proposal: 104 states. Abstained: 10 states. Against: 11 states.
The resolution was supported by 104 nations, with 11 nations voting against and 10 abstaining. The
General Assembly resolution was sent to the Security Council for implementation the same day, and the
UN Under-Secretary-General telegraphed India and Pakistan to inform them of the General Assembly's
resolution (UNGA Resolution, 2793).
1611th Meeting of the Security Council (December 12, 1971)
While the UN General Assembly adopted the ceasefire resolution, the fighting continued, and the
position of the Pakistani forces in Dhaka was collapsing. On December 12, George Bush (Senior)
requested, through the Secretary General, that the Security Council meet urgently to secure a quick
ceasefire (S/10444). Thus, the 1611th Security Council meeting took place at 4 p.m. A large delegation
from India, led by Foreign Minister Sardar Swaran Singh, attended this meeting. Pakistan sent a mission
led by the recently appointed Deputy Prime Minister and Foreign Minister Zulfikar Ali Bhutto to boost
its diplomatic efforts (UN Doc, S/PV/1611).
The Resolution of the United States (S/10446)
The US proposed a draft resolution at the Security Council's emergency meeting on 12 December. The
seven-point resolution demanded a 'prompt ceasefire and army withdrawal' (UN Doc, S/10446). On this
Security Council resolution, the US and USSR took opposing stances, with the US and China openly
supporting Pakistan. The Council President adjourned the meeting at 12.35 pm, to reconvene the next
day.
1613th meeting of the Security Council (December 13, 1971)
The Security Council held its 1613th session at 3 p.m. on December 13. In addition to Security
Council members, India, Pakistan, Saudi Arabia, and Tunisia attended this meeting. The meeting
opened with discussion of the US draft resolution (S/10446). The Council President let Poland's
representative speak first. George Bush, the US representative, said that "India bears the major
responsibility for broadening the crisis by rejecting the UN's efforts to become involved, even in a
humanitarian way, in relation to the refugees, rejecting proposals like our Secretary General's offer of
good offices, which could have defused the crisis, and rejecting proposals that could have started a
political dialogue" (UN Doc, A/PV 2002: 130-141).
Chinese envoy Chiao remarked that "India conspires with Bengali refugees like Tibetan
refugees." He called India an "outright aggressor" pursuing South Asian domination. He further said
the Soviet Union was the principal backer of Indian aggression, and that China wanted a ceasefire and
the withdrawal of both nations' forces (UN Doc, A/PV 2002: 141-146).
In his speech, the Soviet Union's delegate observed, 'The businesspeople and fanatics who
brought this subject before the General Assembly have blinded their eyes to the true situation in the
Indian subcontinent. They are concealing the major reasons for the dispute without examining the
issue.' He dubbed the initiative a case of China-US collusion, which China, he asserted, was using as a
forum for anti-Soviet propaganda (UN Doc, A/PV 2003: 173-185). After the debate, the Council
President put the United States' updated draft resolution (S/10446/Rev.1) to the vote for Security
Council approval. The third veto by the Soviet Union, a permanent UN Security Council member,
blocked the cease-fire resolution (UN Doc, S/PV/1613: 174).
3rd veto of the Soviet Union in favor of Bangladesh in the Security Council (S/10446/Rev.1)
In favor of the US proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria. Abstained: United Kingdom, France. Against: Soviet Union, Poland.
The Proposal by Italy and Japan (S/10451)
After the vote on the US proposal, Italy and Japan jointly presented another draft resolution at this
session of the Security Council. There were a total of nine points in this proposal. The main point of the
resolution was to 'maintain the national integrity of Pakistan and reach a comprehensive political
solution to this crisis' (UN Doc, S/10451).
1614th meeting of the Security Council (December 14, 1971)
The 1614th Security Council meeting commenced at 12.10 pm on December 14. The meeting did not
achieve a consensus. Britain engaged in discussions with other members of the Council, notably
France, in order to develop a new proposal that would meet the approval of all parties involved.
Poland presented a draft resolution (S/10453) to the President of the Council, outlining a six-point
plan for a ceasefire.
At this point, the Security Council's working procedure was addressed. After presenting their
recommendations, Britain and Poland requested that the meeting be deferred until the next day so that
they could receive instructions from their governments. All Council members agreed, apart from mild
reservations expressed by China. To permit formal deliberations on the British-French and Polish
proposals, the Council President postponed the meeting (UN Doc, S/PV/1614: 49).
1615th meeting of the Security Council (December 15, 1971)
The 1615th Security Council meeting was convened at 7.20 pm on December 15. At the Council
President's request, the delegates of India and Pakistan attended this meeting. The attendees discussed
four draft proposals: the Polish proposal (UN Doc, S/10453/Rev-1), the resolution of France and Britain,
Syria's resolution, and the Soviet Union's resolution. The Polish proposal included a 'ceasefire and
departure of West Pakistani soldiers from East Pakistan'. The Syrian draft resolution declared that
"Pakistani political prisoners should be released, so that they can implement their mandate in East
Pakistan". After negotiations, the UK and France proposed a draft resolution similar to Syria's, addressing
ceasefires in the eastern and western theatres of the subcontinent separately and calling for discussions
on a political settlement with the elected representatives. Britain, France, and the Soviet Union made
similar proposals: the Soviet Union demanded a comprehensive political solution involving East
Pakistan's elected representatives, together with the declaration of a cease-fire (UN Doc, S/PV/1615).
The Chinese representative spoke first, saying, 'The Security Council should respect Pakistan's
independence, sovereignty, national unity and geographical integrity' (UN Doc, S/PV/1615: 13). The
Council President then invited the Sri Lankan delegate to speak. The Sri Lankan representative said that
Sri Lanka sought a neutral solution: 'This solution should be one where triumph is devoid of difficulties,
loss is without consequence and above all peace prevails' (UN Doc, S/PV/1615: 22). The statement of
Pakistan's Deputy Prime Minister and Foreign Minister Zulfikar Ali Bhutto on these suggestions was
spectacular. In a passionate address he strongly criticized the Security Council, calling it a 'stage of
deceit and farce'. He told the Security Council, sarcastically, to legitimize every unlawful occurrence up
to December 15, to impose a treaty harsher than Versailles, and to legalize the occupation. Pakistan, he
declared, would fight on; he would withdraw, but the fight would continue, for his country was calling
him, and there was no point in wasting time in the Security Council. He refused to participate in such a
disgraceful surrender of his nation and urged that this 'monument of failure', the General Assembly and
the Security Council, be done away with. Concluding his remarks, he tore up the draft resolutions of the
four nations, including Poland's, and walked out (UN Doc, S/PV/1615: 84), and the Pakistani delegation
left the Security Council. Accepting Poland's proposal (UN Doc, S/10453) might have benefited
Pakistan: India, 'although grudgingly', had approved the idea with Soviet backing, and the Pakistani
military would not have had to surrender so humiliatingly if the delegation had accepted it.
Of the four drafts, the Council President called Poland's proposal the most timely. The Security Council
discussed the four draft proposals, but none of the member nations showed interest in voting on them;
instead, they continued to deliberate. The Council President therefore adjourned the meeting until
10.30 am on December 16 (UN Doc, S/PV/1615: 139).
1616th meeting of the Security Council (December 16, 1971)
The 1616th Security Council meeting was convened at 10:30 am on December 16. The Security
Council President invited Indian Foreign Minister Sardar Swaran Singh, Saudi Ambassador Jamil
Baroodi, the Tunisian representative, and the Sri Lankan representative to this meeting. The President
stated that five draft resolutions awaited decision before the Council: Italy and Japan (S/10451), Poland
(UN Doc, S/10453/Rev-1), Syria (UN Doc, S/10456), France and Britain (UN Doc, S/10455), and the
Soviet Union (UN Doc, S/10457). The Chinese and Soviet draft resolutions (S/10421) and (S/10428)
had not been put to the vote (UN Doc, S/PV/1616: 3).
After the President's opening remarks, Indian External Affairs Minister Sardar Swaran Singh read out a
statement from Indira Gandhi. This statement contained two main points:
a. With the surrender of the Pakistani army in Dhaka, Bangladesh had come into being.
b. India declared a ceasefire on its western front (UN Doc, S/PV/1616: 5).
The 1616th Security Council meeting ended at 1.10 pm.
1617th meeting of the Security Council (December 16, 1971)
The 1617th Security Council meeting convened at 3.00 pm, after the Indian Foreign Minister had
announced the creation of Bangladesh through the surrender of Pakistani forces in Dhaka. Besides
Security Council members, India, Pakistan, Tunisia, and Saudi Arabia attended this meeting. During
this meeting a Soviet draft resolution (S/10458) welcomed India's ceasefire proposal. Japan and the US
presented a seven-point draft resolution (S/10450) on compliance with the Geneva Conventions (1949),
including the safe return of refugees, and later submitted a revision of this plan as S/10459/Rev.1. The
meeting ended at 9.45 pm without the Security Council adopting any resolution (UN Doc, S/PV/1617).
1620th meeting (Final meeting) of the Security Council (December 21, 1971)
The UN Security Council was unable to achieve a compromise despite the increasing tensions in
Bangladesh and the unilateral ceasefire declared by India. Argentina, Burundi, Italy, Japan,
Nicaragua, Sierra Leone, and Somalia together presented Security Council resolution S/10465 on
December 21. The resolution sought to 'monitor a cessation of hostilities and encourage all relevant
parties to comply with the provisions of the Geneva Conventions'. During the plenary session, the
resolution received the support of 13 states, while the Soviet Union and Poland abstained (UN
Doc, S/PV/1620).
Consequences of the provisional 7-state resolution in the Security Council
In favor of the proposal: United States, China, United Kingdom, France, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria. Abstained: Soviet Union, Poland. Against: none.
The Security Council thus finally approved a ceasefire resolution. The eventful 26th (special) session of
the General Assembly ended on 22 December, after the Security Council had passed the resolution.
Bangladesh had attained independence without UN assistance.
Conclusion
The Bengali liberation war against the Pakistani forces in besieged Bangladesh lasted from March 26 to
December 16, 1971. During the Liberation War, the UN did nothing to address the genocide and human
rights abuses in East Pakistan, largely because of its dependency on the US. The UN's positive
contribution in alleviating the refugees' immediate needs in India has, however, always been noted: the
UN's greatest refugee aid effort up to that time took place for Bangladesh in 1971. At the time, the UN
did not prioritize the political issues needed to establish a lasting solution for the refugees, and the
major nations preferred geopolitical and national solutions outside the UN. The Bangladesh question
was not resolved by either the UN Security Council or the General Assembly. The US and China
pursued a 'leaning strategy' toward Pakistan, and the USSR toward India. The Soviet Union's veto
repeatedly thwarted China-US efforts in the Security Council to keep Pakistan united and prevent the
emergence of Bangladesh. Pakistan's statehood was supported by 104 votes to 11 in the UN General
Assembly's Bangladesh resolution; the vote thus favored national integration (a united Pakistan) in
1971. However, major powers such as France and Britain remained neutral, which helped Bangladesh
gain independence. Bangladesh had already become independent on December 16, 1971, essentially
without UN involvement, before the Security Council passed the ceasefire resolution (S/10465) on
December 21.
References
Ayoob, M. (1972). The United Nations and the India-Pakistan Conflict. Asian Survey, 12(11), 977-
988. https://doi.org/10.2307/2642776
Azad, A. K. (2013). Bangladesh: From Nationhood to Security State. International Journal of Asian
Social Science, 3(7), 1516-1529.
Bina, D. (2011). The Role of External Powers in Bangladesh's Liberation War. Journal of South Asian
and Middle Eastern Studies, 35(2), 27-42.
Hossain, K. (2014). International Legal Aspects of the Bangladesh Liberation War of 1971. Journal of
Asian and African Studies, 49(5), 613-628. https://doi.org/10.1177/0021909613490131
Islam, S. M. (2012). The United Nations and the Bangladesh Crisis of 1971: A Legal Perspective.
Asian Journal of International Law, 2(2), 401-421. https://doi.org/10.1017/S2044251312000172
Mookherjee, N. (2011). The Bangladesh Genocide: The Plight of Women during the 1971 Liberation
War. Gender, Technology and Development, 15(1), 101-114. https://doi.org/10.1177/097185241001500105
Raghavan, S. (2013). 1971: A Global History of the Creation of Bangladesh. Harvard University
Press.
Sisson, R., & Rose, L. E. (1991). War and Secession: Pakistan, India, and the Creation of Bangladesh.
University of California Press.
Sobhan, R. (1982). The Crisis of External Dependence: The Political Economy of Foreign Aid to
Bangladesh. University Press Limited.
Tahmina, Q. (2001). The UN and the Bangladesh Liberation War of 1971: Interventions and
Consequences. Journal of International Affairs, 55(2), 453-469.
UN Doc, S/10410, Para 6-10.
UN Doc, S/PV/1606, Para 1-371, 5 December, 1971.
UN Doc, S/10416, 4 December, 1971.
UN Doc, S/10417, 4 December, 1971.
UN Doc, S/10418, 4 December, 1971.
UN Doc, S/10419, 4 December, 1971.
UN Doc, S/PV/1607, Para 1-234, 5 December, 1971.
UN Doc, S/10421, 5 December, 1971.
UN Doc, S/10423, 5 December, 1971.
UN Doc, S/10425, 5 December, 1971.
UN Doc, S/PV/1608, Para 1-187, 6 December, 1971.
UN Doc, S/10426, 6 December, 1971.
UN Doc, S/10428, 6 December, 1971.
UN Doc, S/10429, 6 December, 1971.
UN Doc, A/L 647, 7 December 1971.
UN Doc, A/L 647/Rev-1, 7 December 1971.
UN Doc, A/L 648, 7 December 1971.
UN General Assembly Resolution 2793, Vol- XXVI.
UN Doc, S/PV/1611, 12 December 1971.
UN Doc, S/10446, 12 December 1971.
UN Doc, S/PV/1613, Para1-174, 13 December 1971.
UN Doc, A/PV 2002, PP.130-146.
UN Doc, A/PV 2003, PP.173-185.
UN Doc, S/10451, 13 December 1971.
UN Doc, S/PV/1614, Para1-49, 14 December 1971.
The United Nations' Involvement in Bangladesh's Liberation War: A Detailed Analysis
Md. Firoz Al Mamun 1
, Md. Mehbub Hasan 2 & Md. Ruhul Amin, PhD 3
1 Assistant Professor, Department of Political Science, Islamic University, Kushtia, Bangladesh
2 Researcher and Student, Department of Government and Politics, Jahangirnagar University, Savar, Dhaka-1342
3 (Corresponding Author), Associate Professor, Department of Public Administration, Comilla University, Cumilla, Bangladesh
Abstract
On March 26, 1971, the Bangladeshi independence struggle against domestic imperialism and ethnic
discrimination in Pakistan got underway. The United Nations gave relief and humanitarian activities
first priority from the start of the Liberation War until November. The UN Security Council was called
in when India and Pakistan entered the Liberation War on December 3. The Security Council meetings
continued as different suggestions and counterproposals were presented. In the Security Council, there
was a clash between the USSR and the US: while the USSR helped Bangladesh, China and the US
helped Pakistan. Keeping their positions neutral, France and Britain did not cast votes in the Security
Council, and the Security Council could therefore not come to an agreement. On December 6, after
discussion and a formal decision, the Security Council sent the agenda to the General Assembly. On
December 7, a resolution headed "Unity Formula for Peace" was overwhelmingly approved at the
General Assembly. As India and Bangladesh rejected this idea, the US called a second Security Council
session. Sessions of the Security Council were held at various intervals between December 12 and 21.
Everything changed dramatically when Bangladesh gained its independence on December 16. The
protracted Bangladesh war was essentially resolved on December 21, when the Security Council
approved a ceasefire resolution.
Keywords: Liberation War, Bangladesh, United Nations, International Intervention, Conflict Resolution.
Introduction
The 1971 Bengali nation's armed struggle for independence took on an international dimension; as the
conflict drew to a close, India and Pakistan became directly involved, and the major powers and their
allies began actively competing with one another over the establishment of an independent state of
Bangladesh. This effort included international and multinational aspects in addition to bilateral and
regional forms (Jahan, 2008: 245). The larger forum in this instance, where the major powers and
stakeholders participated in various capacities, was the UN. Decisions made and carried out by the
United Nations, as a global institution, usually reflect the agreement of the major powers, and the
decision-making process primarily mirrors how the major powers view a given situation. The UN
Security Council may reach an impasse, in which case the General Assembly may adopt certain limited
measures. All of this occurred during the Bangladesh crisis of 1971 (Matin, 1990: 23). With
Bangladesh's emergence on December 16, the map of the subcontinent was redrawn. The United
Nations' involvement in these matters, however, was primarily restricted to humanitarian efforts and
relief activities. The Pakistan military, under the standard pretexts of national integrity and internal
affairs, attempted to stifle the calls for freedom of the people of East Pakistan through genocide and
ethnic oppression, but this attempt was thwarted by the
establishment of Bangladesh. For this reason, it is plausible to argue that Bangladesh's establishment,
following the dissolution of the post-World War II state structure, was a highly justifiable event.
Following its declaration of independence, Bangladesh joined a number of UN bodies in 1972 and
attained full membership status in 1974 (Hussain, 2012: 189). There is a dearth of scholarship on the
United Nations' involvement in the Great War of Liberation. This study is therefore highly significant,
and the authors have made a concentrated effort to examine and unearth material on the role of the
organization charged with maintaining world peace and security throughout the Great War of
Liberation.
Research Methodology
The article titled 'The Role of the United Nations in the Great Liberation War of Bangladesh: An
Analysis' has all of the basic aspects of social research. Data was gathered from secondary sources, and
the research employed both qualitative and quantitative methods, analyzed using the content analysis
methodology.
The research relies on secondary sources, either directly or indirectly. The study was carried out by
gathering information from worldwide media coverage, UN documents, publications, research papers,
reports, archives relating to the liberation war, and records housed in the museum relating to
Bangladesh's War of Liberation (1971).
The Role of the United Nations in the early stages of the Liberation War
All UN employees were evacuated from Dhaka on March 25, 1971, the night the Pakistani armed
forces launched "Operation Searchlight" and the liberation war began. The UN, however, did not move
to halt the genocide and the violations of human rights in East Pakistan. On April 1, nonetheless, the
Secretary General made an emergency humanitarian offer to the Pakistani government for the
inhabitants of East Pakistan. The Pakistani government turned down the offer of humanitarian
assistance and even forbade the Red Cross relief aircraft from landing in Dhaka (Hossein, 2012: 150).
President Yahya Khan gave the UN authorization to carry out relief operations after the UN Secretary
General appealed to the Pakistani government on April 22 for immediate humanitarian aid. Beginning
on June 7, 1971, the United Nations started assistance efforts in East Pakistan. The acronym UNROD
stood for the United Nations Relief Operation in Dacca: the United Nations recognized the name
"Bangladesh" on December 21 and dubbed the relief operation "UNROD" (Time Magazine, January 1,
1972). The surge of refugees entering India prompted the Indian government, on April 23, to make its
first appeal for outside assistance since the start of the liberation struggle. Coordination in this respect
was taken up by the United Nations High Commissioner for Refugees (UNHCR); besides UNHCR,
UNICEF and the WFP were actively involved in the Indian refugee camps. The World Bank estimated
that the Indian government spent $1 billion on refugees overall up to December, of which just $215
million came from UN assistance. The relief operation involved by far the biggest airlift in UN history
(International Herald Tribune, July 8, 1971). The funds committed to and received by India from the
UN and other sources up to June were:
International Aid to India (June, 1971)
United Nations: 9,80,00,000 | Other Sources: 16,50,00,000 | Total: 26,30,00,000
Source: International Herald Tribune, 8 July, 1971.
United Nations product aid to India
1. Food aid: 6,267 tons
2. Vehicles: 2,200 pieces
3. Medical supplies: 700 tons
4. Polythene for making shelters: enough for 3 million refugees
Source: Rahman, Hasan Hafizur (ed.) (1984) Bangladesh Liberation War Documents, Volume- 13,
Dhaka: Ministry of Liberation War Affairs, Government of the People's Republic of Bangladesh, page
783-87.
Though the UN participated in the relief effort, until September no discussions were held in the UN on
matters such as the liberation struggle in Bangladesh, the genocide, or the abuses of human rights.
Bangladesh was even left off the agenda of the UN's annual general debate in September. Still, the
leaders of several nations brought up Bangladesh in their statements.
Proposal for deployment of United Nations observers in East Pakistan
Early in the Liberation War, India asked the UN to step in to handle the refugee crisis and to put an end
to the genocide in East Pakistan. At first, Pakistan opposed UN intervention in the refugee crisis,
viewing any UN action as meddling in its domestic affairs (Hasan, 1994: 251-53). However, on US
advice, Yahya Khan consented from May onwards to embrace all UN measures. During this period
Pakistan began participating actively in diplomatic efforts in a number of UN forums, with support
from Muslim nations and the United States, in the hope that India would be compelled, under UN
pressure, to cease aiding Bangladesh's independence movement. Recognizing this, India vehemently
objected to the UN's political role, which was concealed behind humanitarian endeavors. Although the
UN Secretary General had mostly remained silent on ending the genocide and the breaches of human
rights since the start of the Liberation War, on July 19 he suggested that "UN peacekeepers or observers
be deployed on the India-Pakistan border to resolve the refugee problem." The UN Secretary General's
plan to send troops or monitors was shelved after the Mujibnagar administration and India turned down
this offer (Hossein, 2012: 87).
The Secretary General's initiative under Article 99 of the UN Charter
The UN Secretary-General, U Thant, submitted a memorandum under Article 99 to the President of
the Security Council and the member nations on July 20, 1971, the day following his proposal for the
deployment of observers. The Secretary General's letter contained eight paragraphs or suggestions.
"Obviously, it is for the members of the Security Council themselves to decide whether such
consideration should take place formally or informally, in public or in private," he stated in the note
(UN Doc, A/8401).
India, the primary backer of Bangladesh's independence movement, was put in a difficult
position by the Secretary General's suggestions. The Soviet Union supported India in this
situation: following the signing of the Soviet-Indian treaty, the Soviet Union was India's principal
foreign backer. On August 20 the Soviet Union asked the Secretary-General not to call a meeting of the
Security Council to discuss the East Pakistan issue. As a result, the Security Council did not meet, even
at the Secretary General's suggestion. Major nations and interested parties maintained their diplomatic
efforts in anticipation of the 26th session of the United Nations General Assembly, which was
scheduled to open on September 21 (The Year Book of World Affairs, 1972).
United Nations Intervention in the Question of Bangabandhu's Trial
In the final week of July, many media sources reported that Sheikh Mujibur Rahman was set to face
trial for treason. The Mujibnagar government promptly raised the alarm following the publication of
this news, for Sheikh Mujib was the unquestioned leader of Bangladesh's liberation movement.
Consequently, the Mujibnagar government formally requested the international community and
influential nations to ensure the safety and well-being of Sheikh Mujib (Joy Bangla, July 30, 1971). The
trial of Sheikh Mujibur Rahman commenced on August 9, 1971, under the authority of the Pakistani
government. On August 10, U Thant, the Secretary General of the United Nations, intervened in the
Pakistani military junta's attempt to bring Sheikh Mujibur Rahman to trial. The Secretary General
stated clearly that the matter was highly sensitive and delicate and fell within the responsibility of the
legal system of Pakistan as a member state, but that it was also a subject of great interest and concern in
several spheres, both humanitarian and political. The Secretary General had been regularly receiving
expressions of grave concern from government representatives regarding the situation in East Pakistan,
and it was widely believed that unless some form of agreement was reached, the restoration of peace
and normalcy in the region was unlikely. The Secretary General concurred with several members that
any development concerning the fate of Sheikh Mujibur Rahman would undoubtedly have
repercussions beyond the borders of Pakistan (The International Herald Tribune, August 10, 1971).
Delegation of Bangladesh to the United Nations
The United Nations General Assembly meets every September. On September 21, the Mujibnagar
administration (the first government of independent Bangladesh) agreed to dispatch a 16-member team
led by Justice Abu Saeed Chowdhury, then stationed in London. On September 25, the Bangladesh
delegation convened and nominated Fakir Shahabuddin as its member secretary. Bangladesh was not
yet a member of the United Nations, so the delegation had a tough time entering the UN building;
Pakistan, in particular, tried to label the delegation as 'rebellious elements'. Even in this
hostile climate, the group continued to engage in creative and intellectual activities as the Mujibnagar
government's representation on the United Nations premises. Yogendranath Banerjee, President of the
United Nations Association of Journalists, assisted the team in entering the United Nations building. In
October, the Bangladesh delegation held a full press conference in a space at the Church Center,
located on the west side of 777 United Nations Plaza. In this way, the Bangladeshi representation to the
United Nations actively participated in mobilizing global opinion in Bangladesh's favor.
The 26th meeting of the General Assembly
The latter portion of the General Assembly session that opened at UN headquarters in September 1971
focused on the membership of the People's Republic of China and the status of Bangladesh. The 26th
session failed to resolve the 'Bangladesh dilemma'. Bangladesh was nevertheless cited in the annual
report of the Secretary General and in remarks made by national representatives. In his report, UN
Secretary-General U Thant emphasized the imperative for the international community to provide
comprehensive assistance to governments and peoples in the event of a large-scale disaster, and in UN
Document A/8411 he asserted that the only viable resolution to the underlying issue lay in a political
approach centered on reconciliation and humanitarian principles.
Both the formal proceedings and the informal gatherings of country representatives at UN
headquarters during this session focused on China's UN membership and Bangladesh. New Zealand,
Madagascar, Luxembourg, Belgium, Norway, and Sweden stressed the situation in the subcontinent
before the UN General Assembly and demanded a quick settlement. France and Britain told Pakistan to
restore a popular administration in East Pakistan. The Soviet Union no longer regarded the situation as
a purely internal Pakistani issue. Only the US showed ambivalence and leniency toward Pakistan.
Luxembourg's delegate asked, "When we witness millions of people suffering indescribably, being
brutally punished in the guise of national security, and civilized society's weakest losing their rights,
should such cruelty be allowed to continue in the name of national sovereignty and security?" On
September 29, Canadian Foreign Minister Mitchell Sharp said, 'When an internal conflict is moving so
many nations so directly, would it be right to consider it an internal matter?' Sweden advised Pakistan
to be flexible; its representative remarked that "it would behove Pakistan to respect human rights and
accept the public opinion declared through voting". The US essentially backed Pakistan in the session,
saying that "Pakistan's internal issues will be dealt with by the people and government of Pakistan."
The East Pakistan problem had generated a worldwide catastrophe, and Pakistan's
ruthlessness had caused millions of refugees to cross the border and seek asylum in other nations.
session, the French foreign minister remarked, "If this injustice cannot be corrected at the root, the
flow of refugees will not stop." Belgians repeated Schumann's query, "Will the return of the refugees
be possible?" He noted "a political and constitutional solution to this crisis must be found”. This
remedy should come from public opinion. Only when they are confident in the future that human
rights will not be abused will refugees return home. British Foreign Secretary Sir Alec Hume was
clear about the solution (Muhith, 2014). The statements of these countries are arranged in a table and
some important questions are answered for it. These are:
a. States that have identified the Bangladesh question as a political issue;
b. b. States that have termed it only as a humanitarian problem;
c. States that have identified the matter as Pakistan's internal affairs;
d. Only those countries that have spoken of genocide and human rights violations;
Country Problem
description
References to
both political and
humanitarian
aspects
Paying
attention to
humanitarian
issues
Internal
Affairs of
Pakistan
Genocide and
human rights
violations
Afghanistan * *
Albania
Algeria * *
Argentina * * *
Australia * * *
Austria * *
Bahrain
Barbados
Belgium * *
Bhutan
International Journal of Politics & Social Sciences Review (IJPSSR)………………………………Vol. 3, Issue I, 2024
The United Nations' Involvement in Bangladesh's Liberation War ------Mamun, Hasan & Amin
Page | 14
Bolivia
Botswana
Brazil
Bulgaria
Burma
Burundi
Belarus
Cameroon
Canada
Central African
Republic
Sri Lanka (Ceylon) * *
Chad
Chile * *
China * *
Colombia
Congo
Costa Rica
Cuba
Cyprus * *
Czechoslovakia
Secretary General's Good Office Proposal
At the 26th General Session, governments, the international media, and the people put pressure on UN
Secretary General "U Thant" to take a new crisis action for Bangladesh. On October 20, he gave India
and Pakistan his good office. The Secretary General said, "In this potentially very dangerous situation,
I feel that it is my duty as Secretary General to do everything I can to help the government
immediately concerned avoid any disaster." I want you to know that my offices are always open if
you need help (UN Doc, S/10410:6).
This letter of the Secretary General implies that he views the matter as an India-Pakistan war.
President Yahya Khan also wanted a Pak-India confrontation. Yahya Khan informed the Secretary
General a day later that Pakistan had accepted this idea. I appreciate your willingness to provide your
good offices and hope you can visit India and Pakistan immediately to negotiate force withdrawal. I
am convinced this will benefit and advance peace. UN Doc, S/10410: 7
However, India did not reject the UN Secretary General's 'good office'. According to the
status of UN Secretary General and diplomatic etiquette, India could not reject this plan outright,
therefore it rejected it indirectly. The Secretary General's recommendation came as Indira Gandhi was
touring the world to promote Bangladesh's liberation fight. Upon returning from abroad, he informed
the Secretary General on November 16 that the military rule of Pakistan was a severe threat to
national life and security.
Indira Gandhi said that Pakistan wants to make problems within Pakistan into problems
between India and Pakistan. Second, we can't ignore the reason why people are crossing borders as
refugees. Indira Gandhi kindly told the Secretary General that instead of India and Pakistan meeting,
Yahya Khan and the leaders of the Awami League should do it. "It's always nice to meet you and talk
about our ideas," she said. We will back your efforts to find a political solution in East Bengal that
meets the stated needs of the people, as long as you are ready to look at the situation in a broader
context (Keesings, 1972).
In his response, the Indian Prime Minister said that the UN Secretary-General was guilty. In
order to protect the Pakistani junta, the Secretary General is avoiding the main problem. In a message
to the Prime Minister of India on November 22, the Secretary-General denied the charges, saying that
good office requires everyone to work together. In this very important and complicated case, there
doesn't seem to be a reason for the Secretary General to help. 10 (UN Doc S/10410). The UN
Secretary-General's "Good Office" project in the subcontinent stopped when this message was sent.
1606th Session of the Security Council (December 4, 1971)
On December 3, India entered the Pakistan War, threatening peace and stability in one of the world's
most populated areas. Both nations reported the incident to the UN Secretary General on December 4.
After thoroughly evaluating the problem, the Secretary-General requested a Security Council session
from Council President Jakob Malik (Soviet Union) (The New York Times, 4 December 1971).
International Journal of Politics & Social Sciences Review (IJPSSR)………………………………Vol. 3, Issue I, 2024
The United Nations' Involvement in Bangladesh's Liberation War ------Mamun, Hasan & Amin
Page | 15
The 1606th Security Council session (5 permanents—US, Soviet Union, China, UK, France—
and 10 non-permanents—Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone,
Somalia, Syria) meets on December 4, 1971. Justice Abu Saeed Chowdhury, the Bangladesh
delegation leader, asked the Security Council President to advocate for the Mujibnagar administration
before the meeting. The Security Council President proposed listening to Justice Abu Saeed
Chowdhury's remarks as Bangladesh's envoy at the start of the meeting.
A lengthy Security Council debate on hearing Justice Abu Saeed Chowdhury's remarks from
Bangladesh. The council president presented two ideas in response to criticism.
a. Permit the letter from Justice Abu Saeed Chowdhury, the representative of Bangladesh, to be circulated as a Security Council document.
b. Allow Justice Abu Saeed Chowdhury to address the Council as a representative of the people of Bangladesh.
The majority of member states did not object in principle to circulating the letter, so the Council President issued a ruling granting the first request. However, due to a lack of the required support, the President rejected the second motion, which would have allowed Justice Chowdhury to join the Security Council debate (UN Doc, S/PV/1606).
The Security Council extended an invitation to the representatives of India and Pakistan to
make remarks. The first speaker was Agha Shahi, Pakistan's Permanent Representative to the UN. He
charged India with breaking Articles 2(4) and 2(7) of the UN Charter in his long statement, and he
called on the UN to take responsibility for safeguarding Pakistan's territorial integrity (UN Doc,
S/PV/1606: 49-148). In his remarks, Samar Sen, India's Permanent Representative to the UN, stated that the adversary was sidestepping the core problem and falsely condemning India. According to him, the crisis had resulted from the strategy of keeping seven crore (seventy million) Bengalis under armed control. Although Yahya Khan himself had predicted that Sheikh Mujib would become Pakistan's prime minister, nobody was certain of his whereabouts. The Bengalis had won the elections but had not been granted authority; they had launched nonviolent movements, which were put down by massacres. They were therefore fully justified in demanding their right to self-determination, and on these grounds Samar Sen supported their independence. He added that any ceasefire should be between the Pakistan Army and Bangladesh, not between India and Pakistan (UN Doc, S/PV/1606: 150-85).
1. The United States of America's Security Council Resolution (S/10416)
Following the statements by the Indian and Pakistani delegates, US Representative George Bush Sr. accused India of aggression. His seven-point draft resolution called for an 'immediate ceasefire between India and Pakistan, withdrawal of the armies of both countries to their respective borders, deployment of United Nations observers on the India-Pakistan border, [and] taking all necessary steps for the repatriation of refugees' (UN Doc, S/10416). Every Security Council member participated in the discussion of the US proposal.
2. Belgium, Italy, and Japan's Proposals (S/10417)
Belgium, Italy, and Japan submitted a five-point draft resolution to the Security Council in
response to the US proposal. In line with the UN Charter's tenets, the draft resolution calls on "the
governments of both countries to immediately cease hostilities and all forms of hostilities and to take
necessary measures for the rapid and voluntary repatriation of refugees" (UN Doc, S/10417).
3. The Soviet Union's Security Council Resolution (S/10418)
In opposition to the American plan, the Soviet Union put forward a two-point draft resolution at the Security Council's 1606th meeting calling for an end to hostilities in East Pakistan. The Soviet proposal demanded 'a political solution in East Pakistan, which would end hostilities there and at the same time stop all terrorist activities by the Pakistan Army in East Pakistan' (UN Doc, S/10418).
4. The Argentine, Nicaraguan, Sierra Leonean, and Somalian proposals (S/10419)
Argentina, Nicaragua, Sierra Leone, and Somalia submitted a two-point draft resolution (S/10419) to the Security Council on the Soviet Union's advice. Under the draft resolution (UN Doc, S/10419), both nations were to 'immediately ceasefire and withdraw', and the Secretary-General was to 'keep the Security Council regularly informed of the situation'.
The Security Council thus had four draft resolutions before it at its 1606th meeting. Following a thorough discussion and debate, the Council President put the US proposal, one of the four drafts, to a vote among the member nations.
1st veto of the Soviet Union in favor of Bangladesh in the Security Council (S/10416)

In favor of the US proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria
Abstained from voting: United Kingdom, France
Against the US proposal: Soviet Union, Poland
When the US resolution calling for a ceasefire and the withdrawal of forces was put to the vote in the Security Council, 11 members voted in favor, the Soviet Union and Poland voted against, and the UK and France abstained. The Soviet Union, a permanent member of the Security Council, thereby vetoed the motion; it was the Soviet Union's 106th veto in the UN Security Council (UN Doc, S/PV/1606: 357-71).
1607th Emergency Session of the Security Council (December 5, 1971)
The Security Council convened its 1607th session at 2.30 p.m. on December 5. This session was unusual in that Tunisia (from Africa) and Saudi Arabia (from Asia), neither of them Security Council members, were permitted to speak; they attended at the Security Council President's request. I. B. Taylor-Kamara (Sierra Leone) chaired the session (UN Doc, S/PV/1607).
The Resolution of China (S/10421)
At this session China introduced a draft resolution. China's draft termed India an aggressor, condemned its role in the creation of Bangladesh, and demanded 'the unconditional and immediate withdrawal of the Indian army occupying Pakistani territory' (UN Doc, S/10421).
After China's draft proposal, the Tunisian ambassador spoke on behalf of Africa, saying, 'The Security Council should also call for a ceasefire, so that peace can be established according to the various clauses of the Charter.' The Saudi representative, Jamil Baroody, then spoke for Asia, calling for a meeting of the Asian heads of state of the subcontinent to free the region from the politics of the big powers. After the Saudi delegate, the Soviet representative introduced a draft proposal (S/10422, December 5, 1971).
The Soviet Union argued that a 'ceasefire may be a temporary solution but a permanent one would need a political accord between India and Pakistan', and the Soviet delegate accused the US and China of disregarding the major issues for the sake of 'temporary interests'. Pakistan and India then addressed the Security Council. Afterwards, the Council President informed the Council that three resolutions were now before it: S/10418 (Soviet Union), S/10421 (China), and S/10423 (8 nations). S/10417 and S/10419 were no longer before the Council, since their sponsors had joined in presenting the eight-nation resolution (S/10423), which subsumed them. The Council President put the Soviet proposal to the vote first (UN Doc, S/PV/1607: 75-201).
Consequences of the Soviet Union's proposal (S/10418)

In favor of the proposal: Soviet Union, Poland
Abstained from voting: United States, United Kingdom, France, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria
Against the proposal: China
The Chinese veto caused the proposal to be rejected; in any case, it had not convinced the majority of members, and those who abstained voiced their reservations in their speeches. When the Council President moved to put the Chinese proposal (S/10421) to a vote after the vote on the Soviet proposal, the Chinese delegate stated that China was still in consultation with other Council members, and since China showed no interest in a vote, none was held on its proposal. The eight-nation draft proposal, headed by Argentina, was then put to a vote by the Council President.
2nd veto of the Soviet Union in favor of Bangladesh in the Security Council (S/10423)

In favor of the 8-nation proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria
Abstained from voting: United Kingdom, France
Against the proposal: Soviet Union, Poland
The proposal received 11 votes in favor; the UK and France abstained, and the Soviet Union and Poland voted against. With the Soviet Union exercising its veto once more, the eight-nation ceasefire resolution failed (UN Doc, S/PV/1607: 230-331). After the vote on the eight-nation resolution, the French delegate described such motions and counter-motions as 'presumptive'. The Council President then notified the Council of two further resolutions, (S/10421) and (S/10425), exhorted member nations to find a solution, and postponed the discussion until 3.30 pm the next day.
The Proposals by 8 Nations (S/10423)
In this session of the Security Council, eight non-permanent members (Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, and Somalia), led by Argentina, put forward a three-point proposal. The resolution called for a ceasefire and the creation of an environment for the return of refugees (UN Doc, S/10423).
The Proposals by 6 Nations (S/10425)
At the 1607th session of the Security Council, six nations (Belgium, Italy, Japan, Nicaragua, Sierra Leone, and Tunisia) proposed another three-point resolution. The proposal:
a. urged the governments concerned to implement an immediate ceasefire;
b. requested the Secretary-General to keep the Council informed on the resolution's implementation; and
c. recommended that the Council continue to consider methods of restoring peace in the region (UN Doc, S/10425).
1608th meeting of the Security Council (December 6, 1971)
The 1608th Security Council session took place at 3.30 pm on December 6, 1971. This session, like the previous ones, allowed India and Pakistan, as well as Tunisia from Africa and Saudi Arabia from Asia, to take part in the debate. I. B. Taylor-Kamara (Sierra Leone) convened this Security Council session (UN Doc, S/PV/1608: 1-5).
Soviet Union Resolution (S/10426)
Early in this session, the Soviet delegate offered a new resolution containing two revisions to the six-nation draft resolution (S/10425): in operative paragraph 1, replacing 'the Governments concerned' with 'all parties concerned' and adding 'and cessation of all hostilities'.
Peace Proposal Unity Formula (S/10429)
In the wake of the Security Council impasse, eleven member nations informally discussed bringing the issue to the General Assembly. Following these discussions, Argentina, Somalia, Nicaragua, Sierra Leone, Burundi, and Japan presented a draft resolution (S/10429) to the Security Council recommending a special session of the UN General Assembly, since the permanent members had failed to reach consensus at the 1606th and 1607th meetings. The proposal invoked the General Assembly's decision of 3 November 1950 [Resolution 377 A (V)], commonly known as the 'Uniting for Peace' resolution: when the Security Council is deadlocked, the General Assembly may act under this formula to maintain international peace and security.
Soviet Union Resolution (S/10428)
Late in this session, the Soviet Union introduced another draft resolution. In this five-point draft, the USSR urged that 'all parties concerned should immediately cease hostilities and implement a cease-fire'. It also called for a political solution in East Pakistan, based on the 1970 elections, to end the hostilities, and asked the UN Secretary-General to oversee the implementation of this decision and to continue peace efforts in the area. After a brief discussion of the draft resolutions in the Security Council, the President decided to put the Unity Formula for Peace resolution (S/10429) to the vote, since the Soviet and Chinese resolutions (S/10428 and S/10421) were bound to fail.
Consequences of the Unity Formula for Peace proposal (S/10429) in the Security Council

In favor of the proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria
Abstained from voting: none
Against the proposal: United Kingdom, France, Soviet Union, Poland
After this proposal was passed, the United Nations started to implement the Unity Formula
for Peace (UN Doc, S/PV/1608) with the aim of ending the war in the subcontinent. To protect
Pakistan, China took the initiative to send this proposal to the UN General Assembly.
26th (Special Session) of the General Assembly
In accordance with the Security Council's December 6 decision, the 26th (special) session of the General Assembly was convened at the UN on December 7. Three proposals came before it:
A. a proposal by 13 nations (A/L/647);
B. the 34-nation, Argentine-led proposal (A/L/647/Rev-1); and
C. the Soviet proposal (A/L/648).
The General Assembly considered the three draft proposals for 12 hours on December 7, and 58 of the 131 member nations took part in the debate.
The Resolution of 13 States to the General Assembly (A/L/647)
Thirteen member states introduced a draft resolution for General Assembly debate at the start of this
session. The 13 states' suggestions mainly included the following:
a. Urge Pakistan and India to halt hostilities immediately and withdraw their soldiers to their respective sides of the border.
b. Intensify efforts to repatriate the refugees.
c. The Secretary-General is to urge that the decisions of the Security Council and the General Assembly be implemented.
d. In view of the resolution (UN Doc, A/L 647), urge the Security Council to take appropriate action.
The Resolution of 34 States to the General Assembly (A/L/647 Rev-1)
The General Assembly received a draft resolution from 34 governments, led by Argentina and backed by the US, Muslim nations, and China. The heart of the 34-state resolution was an 'immediately effective Indo-Pakistani ceasefire and evacuation of Indian troops from East Pakistan, respecting the concept of the integrity of Pakistan'.
The Resolution of Soviet Union (A/L/648)
The Soviet Union's proposal states, "Ceasefire may be a temporary solution, but a permanent solution
requires a political agreement between India and Pakistan" (UN Doc, A/L 648).
Countries Participating in the Debate in the Special Session (26th) of the United Nations General Assembly

Asia (17 countries): Bhutan, Sri Lanka, China, Cyprus, India, Indonesia, Iran, Japan, Lebanon, Malaysia, Mongolia, Nepal, Pakistan, Saudi Arabia, Turkey, Jordan, Quake
Africa (14 countries): Algeria, Burundi, Chad, Gabon, Ghana, Ivory Coast, Madagascar, Mauritania, Sierra Leone, Somalia, Sudan, Tanzania, Togo, Tunisia
Europe (15 countries): Albania, Bulgaria, Czechoslovakia, Denmark, France, Greece, Italy, Netherlands, Poland, Portugal, Soviet Union, Sweden, Britain, Yugoslavia, Hungary
Middle and South America (8 countries): Argentina, Brazil, Chile, Ecuador, Mexico, Nicaragua, Peru, Uruguay
Others (4 countries): Australia, Fiji, New Zealand, United States
Source: Prepared by reviewing various UN documents.
Following the deliberations in the General Assembly, the President of the Assembly, Adam Malik (the Foreign Minister of Indonesia), put the amended motion sponsored by the 34 nations, led by Argentina, to a vote in the General Assembly. This was done in accordance with Rule 93 of the Rules of Procedure, which governs the process.
Voting results on the 34-state resolution in the General Assembly

In favor of the 34-state proposal: 104 states
Abstained from voting: 11 states
Against the proposal: 16 states
The resolution was supported by 104 nations, with 16 voting against and 11 abstaining. The General Assembly resolution was sent to the Security Council for implementation the same day, and the UN Under-Secretary-General telegraphed India and Pakistan to inform them of the General Assembly's resolution (UNGA Resolution 2793).
1611th Meeting of the Security Council (December 12, 1971)
Although the UN General Assembly had adopted the ceasefire resolution, the fighting continued and the position of the Pakistani soldiers in Dhaka was collapsing. On December 12, George Bush (Senior) requested that the Security Council meet urgently to secure an immediate ceasefire (S/10444). The 1611th Security Council meeting accordingly took place at 4 p.m. A large delegation from India, led by Foreign Minister Sardar Swaran Singh, attended the meeting, and Pakistan sent a mission led by the recently appointed Deputy Prime Minister and Foreign Minister, Zulfikar Ali Bhutto, to bolster its diplomatic efforts (UN Doc, S/PV/1611).
The Resolution of the United States (S/10446)
The US proposed a draft resolution at the Security Council's emergency meeting on 12 December. The seven-point resolution demanded a 'prompt ceasefire and army withdrawal' (UN Doc, S/10446). On this resolution the US and the USSR took opposite stances, with the US and China publicly supporting Pakistan. The Council President adjourned the meeting at 12.35 pm, to reconvene the next day.
1613th meeting of the Security Council (December 13, 1971)
The Security Council held its 1613th session at 3 p.m. on December 13. In addition to Security Council members, India, Pakistan, Saudi Arabia, and Tunisia attended this meeting, which opened with discussion of the US draft resolution (S/10446). The Council President gave the floor first to Poland's representative. George Bush, the US representative, said that India 'bears the major responsibility for broadening the crisis by rejecting the UN's efforts to become involved, even in a humanitarian way, in relation to the refugees, rejecting proposals like our Secretary General's offer of good offices, which could have defused the crisis, and rejecting proposals that could have started a political dialogue' (UN Doc, A/PV 2002: 130-141).
The Chinese envoy, Chiao, remarked that India was conspiring with the Bengali refugees just as it had with Tibetan refugees. He called India an 'outright aggressor' pursuing domination of South Asia and said that the Soviet Union was the principal backer of Indian aggression. China wanted a ceasefire and the withdrawal of both nations' forces (UN Doc, A/PV 2002: 141-146).
In his speech, the Soviet delegate observed that 'the businesspeople and fanatics who brought this subject before the General Assembly have blinded their eyes to the true situation in the Indian subcontinent' and were concealing the major causes of the dispute without examining the issue. He described the initiative as China-US collusion and accused China of using the forum for anti-Soviet propaganda (UN Doc, A/PV 2003: 173-185). After the debate, the Council President put the United States' revised draft resolution (S/10446/Rev.1) to the vote for Security Council approval. The ceasefire resolution was blocked by the third veto of the Soviet Union, a permanent member of the UN Security Council (UN Doc, S/PV/1613: 174).
3rd veto of the Soviet Union in favor of Bangladesh in the Security Council (S/10446/Rev.1)

In favor of the US proposal: USA, China, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria
Abstained from voting: United Kingdom, France
Against the US proposal: Soviet Union, Poland
The Proposal by Italy and Japan (S/10451)
After the vote on the US proposal, Italy and Japan jointly presented another draft resolution at this session of the Security Council. The proposal contained a total of nine points, the main one being to 'maintain the national integrity of Pakistan and reach a comprehensive political solution to this crisis' (UN Doc, S/10451).
1614th meeting of the Security Council (December 14, 1971)
The 1614th Security Council meeting commenced at 12.10 pm on December 14. The meeting did not achieve a consensus. Britain engaged in discussions with other members of the Council, notably France, in order to develop a new proposal that would meet the approval of all parties involved. Poland presented a draft resolution (S/10453) to the President of the Council, outlining a six-point plan for a ceasefire.
At this point the procedure of the meeting was discussed. After presenting their recommendations, Britain and Poland requested that the meeting be deferred until the next day so that they could receive instructions from their governments. All Council members agreed, apart from China's moderate reservations. To permit formal deliberations on the British-French and Polish proposals, the Council President postponed the meeting (UN Doc, S/PV/1614: 49).
1615th meeting of the Security Council (December 15, 1971)
The 1615th Security Council meeting convened at 7.20 pm on December 15. At the Council President's request, the delegates of India and Pakistan attended. Four draft proposals were discussed: the Polish proposal (UN Doc, S/10453/Rev-1), a resolution by France and Britain, a Syrian resolution, and a Soviet resolution. The Polish proposal called for a 'ceasefire and departure of West Pakistani soldiers from East Pakistan'. The Syrian draft resolution declared that 'Pakistani political prisoners should be released, so that they can implement their mandate in East Pakistan'. After negotiations, the UK and France proposed a draft resolution similar to the Syrian one. Their proposal addressed the
ceasefire in the east and the west of the subcontinent separately and called for political settlement discussions with elected representatives. Britain, France, and the Soviet Union thus made similar proposals: the Soviet Union demanded a comprehensive political solution involving East Pakistan's elected representatives, together with the announcement of a ceasefire (UN Doc, S/PV/1615).
The Chinese representative spoke first, saying, 'The Security Council should respect Pakistan's independence, sovereignty, national unity and geographical integrity' (UN Doc, S/PV/1615: 13). The Council President then invited the Sri Lankan delegate to speak. The Sri Lankan representative stated that Sri Lanka sought a neutral solution: 'This solution should be one where triumph is devoid of difficulties, loss is without consequence and above all peace prevails' (UN Doc, S/PV/1615: 22). Pakistan's
Deputy Prime Minister and Foreign Minister Zulfikar Ali Bhutto's statement on these proposals was dramatic. In a passionate address he strongly criticized the Security Council, calling it a stage of 'deceit and farce'. He told the Council, sarcastically, that it might as well legitimize every unlawful occurrence up to December 15, impose a treaty harsher than Versailles, and legalize the occupation. Pakistan, he declared, would fight on: he would withdraw, but would fight again; his country was calling him, so why should he waste time in the Security Council? He refused to participate in such a disgraceful surrender of his nation and urged the General Assembly and the Security Council to remove this 'monument of failure'. Concluding his remarks, he tore up the draft resolutions of the four nations, including Poland's, and left (UN Doc, S/PV/1615: 84). The Pakistani delegation walked out of the Security Council. Accepting Poland's proposal (UN Doc, S/10453) might have benefited Pakistan: India, 'although grudgingly', had approved the idea with Soviet backing, and the Pakistani military would not have had to surrender so humiliatingly if the delegates had accepted it.
Of the four drafts, the Council President considered Poland's proposal the most timely. The Security Council discussed all four, but none of the member nations showed interest in putting them to a vote, preferring to continue deliberating. The Council President therefore adjourned the meeting until 10.30 am on December 16 (UN Doc, S/PV/1615: 139).
1616th meeting of the Security Council (December 16, 1971)
The 1616th Security Council meeting convened at 10.30 am on December 16. The Security Council President invited the Indian Foreign Minister, Sardar Swaran Singh, the Saudi Ambassador, Jamil Baroody, and the Tunisian and Sri Lankan representatives to this meeting. The President stated that five draft resolutions awaited decision before the Council: those of Italy and Japan (S/10451), Poland (UN Doc, S/10453/Rev-1), Syria (UN Doc, S/10456), France and Britain (UN Doc, S/10455), and the Soviet Union (UN Doc, S/10457). The Chinese and Soviet draft resolutions (S/10421) and (S/10428) had not been put to the vote (UN Doc, S/PV/1616: 3).
After the President's opening remarks, the Indian External Affairs Minister, Sardar Swaran Singh, read out Indira Gandhi's statement, which contained two main points:
a. with the surrender of the Pakistani army in Dhaka, Bangladesh had come into being; and
b. India had declared a ceasefire on the Western Front (UN Doc, S/PV/1616: 5).
The 1616th Security Council meeting ended at 1.10 pm.
1617th meeting of the Security Council (December 16, 1971)
The 1617th meeting convened at 3.00 pm the same day. In the 1616th and 1617th Security Council meetings, the Foreign Minister of India proclaimed the creation of Bangladesh following the surrender of the Pakistani soldiers in Dhaka. Besides Security Council members, India, Pakistan, Tunisia, and Saudi Arabia attended this meeting. During the meeting, a Soviet draft resolution (S/10458) welcomed India's ceasefire proposal, and Japan and the US presented a seven-point draft resolution (S/10459) on compliance with the Geneva Conventions (1949), including the safe return of refugees; they later submitted a revised version, S/10459/Rev.1. The meeting ended at 9.45 pm without a Security Council resolution (UN Doc, S/PV/1617).
1620th meeting (Final meeting) of the Security Council (December 21, 1971)
The UN Security Council had been unable to reach a compromise despite the deteriorating situation in Bangladesh and the unilateral ceasefire declared by India. On December 21, Argentina, Burundi, Italy, Japan, Nicaragua, Sierra Leone, and Somalia jointly presented Security Council resolution S/10465, which sought to 'monitor a cessation of hostilities and encourage all relevant parties to comply with the provisions of the Geneva Conventions'. During the plenary session, the
resolution received the support of 13 states, while the Soviet Union and Poland abstained (UN Doc, S/PV/1620).
Consequences of the seven-nation resolution in the Security Council

In favor of the proposal: United States, China, United Kingdom, France, Argentina, Belgium, Burundi, Italy, Japan, Nicaragua, Sierra Leone, Somalia, Syria
Abstained from voting: Soviet Union, Poland
Against the proposal: none
The Security Council eventually approved the ceasefire. The eventful 26th (special) General
Assembly session ended on 22 December after the Security Council passed the resolution. Bangladesh
attained independence without UN assistance.
Conclusion
The Bengalis' liberation war against the Pakistani forces in besieged Bangladesh lasted from March 26 to December 16, 1971. During the Liberation War, the UN did nothing to address the genocide and human rights abuses in East Pakistan; owing to its dependency on the US, it was unable to confront them. By contrast, the UN's contribution to alleviating the refugees' immediate plight in India has always been acknowledged, and the 1971 Bangladesh crisis saw the UN's greatest refugee aid effort. At the time, however, the UN did not prioritize the political issues involved in establishing a lasting solution for the refugees, and the major nations preferred geopolitical and national solutions pursued outside the UN. The Bangladesh question was never resolved by the UN Security Council or the General Assembly. The US and China pursued a 'leaning strategy' toward Pakistan, and the USSR toward India. The Soviet Union's veto repeatedly thwarted China-US efforts in the Security Council to keep Pakistan united and prevent the emergence of Bangladesh. Pakistan's territorial integrity was supported by 104 votes in the UN General Assembly's Bangladesh resolution, a vote in favor of national integration (a united Pakistan) in 1971. However, major powers such as France and Britain remained neutral, which helped Bangladesh gain independence. In the end, Bangladesh achieved its independence without UN involvement; the Security Council merely passed a ceasefire resolution (S/10465) on December 21, 1971.
References
Ayoob, M. (1972). The United Nations and the India-Pakistan Conflict. Asian Survey, 12(11), 977-
988. https://doi.org/10.2307/2642776
Azad, A. K. (2013). Bangladesh: From Nationhood to Security State. International Journal of Asian
Social Science, 3(7), 1516-1529.
Bina, D. (2011). The Role of External Powers in Bangladesh's Liberation War. Journal of South Asian
and Middle Eastern Studies, 35(2), 27-42.
Hossain, K. (2014). International Legal Aspects of the Bangladesh Liberation War of 1971. Journal of
Asian and African Studies, 49(5), 613-628. https://doi.org/10.1177/0021909613490131
Islam, S. M. (2012). The United Nations and the Bangladesh Crisis of 1971: A Legal Perspective.
Asian Journal of International Law, 2(2), 401-421. https://doi.org/10.1017/S2044251312000172
Mookherjee, N. (2011). The Bangladesh Genocide: The Plight of Women during the 1971 Liberation
War. Gender, Technology and Development, 15(1), 101-114. https://doi.org/10.1177/097185241001500105
Raghavan, S. (2013). 1971: A Global History of the Creation of Bangladesh. Harvard University
Press.
Sisson, R., & Rose, L. E. (1991). War and Secession: Pakistan, India, and the Creation of Bangladesh.
University of California Press.
Sobhan, R. (1982). The Crisis of External Dependence: The Political Economy of Foreign Aid to
Bangladesh. University Press Limited.
Tahmina, Q. (2001). The UN and the Bangladesh Liberation War of 1971: Interventions and
Consequences. Journal of International Affairs, 55(2), 453-469.
UN Doc, S/10410, Para 6-10.
UN Doc, S/PV/1606, Para 1-371, 5 December, 1971.
UN Doc, S/10416, 4 December, 1971.
UN Doc, S/10417, 4 December, 1971.
UN Doc, S/10418, 4 December, 1971.
UN Doc, S/10419, 4 December, 1971.
UN Doc, S/PV/1607, Para 1-234, 5 December, 1971.
UN Doc, S/10421, 5 December, 1971.
UN Doc, S/10423, 5 December, 1971.
UN Doc, S/10425, 5 December, 1971.
UN Doc, S/PV/1608, Para 1-187, 6 December, 1971.
UN Doc, S/10426, 6 December, 1971.
UN Doc, S/10428, 6 December, 1971.
UN Doc, S/10429, 6 December, 1971.
UN Doc, A/L 647, 7 December 1971.
UN Doc, A/L 647/Rev-1, 7 December 1971.
UN Doc, A/L 648, 7 December 1971.
UN General Assembly Resolution 2793, Vol- XXVI.
UN Doc, S/PV/1611, 12 December 1971.
UN Doc, S/10446, 12 December 1971.
UN Doc, S/PV/1613, Para1-174, 13 December 1971.
UN Doc, A/PV 2002, PP.130-146.
UN Doc, A/PV 2003, PP.173-185.
UN Doc, S/10451, 13 December 1971.
UN Doc, S/PV/1614, Para1-49, 14 December 1971.
What actions did the UN Secretary General, U Thant, take in response to the trial of Sheikh Mujibur Rahman in August 1971? |
You may only respond using the context block provided. | Is the United States currently in a recession? | There is no theoretical reason why the criteria used in the Sahm rule is associated with a recession—it is
an observed historical relationship for a small sample and may not always hold going forward. Sahm
herself has indicated that despite her rule getting triggered, she does not believe that the United States is
currently in a recession, although she believes that the risk of recession has increased.
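The Sahm rule is referenced above without being defined. As a rough, non-authoritative sketch (assuming the commonly cited formulation, namely that the indicator triggers when the three-month moving average of the national unemployment rate rises by at least 0.50 percentage points above the low of that same average over the previous twelve months), the computation can be illustrated as follows; the function names and the unemployment figures are invented for illustration and are not official data or methodology.

```python
# Illustrative sketch of the Sahm-rule computation described above (assumed formulation,
# not an official implementation). Rates are monthly unemployment rates in percent.

def three_month_average(rates, i):
    """Average of months i-2..i, with rates listed oldest first."""
    return sum(rates[i - 2:i + 1]) / 3.0

def sahm_triggered(rates, threshold=0.50):
    """True if the latest 3-month average exceeds its 12-month low by >= threshold."""
    if len(rates) < 15:
        raise ValueError("need at least 15 monthly observations")
    last = len(rates) - 1
    current_avg = three_month_average(rates, last)
    # Low of the 3-month average over the 12 months preceding the current month.
    prior_lows = [three_month_average(rates, i) for i in range(last - 12, last)]
    return current_avg - min(prior_lows) >= threshold

# Made-up unemployment rates (percent), oldest first, for demonstration only.
rates = [3.6, 3.5, 3.5, 3.6, 3.6, 3.7, 3.7, 3.8, 3.9, 3.9, 4.0, 4.1, 4.1, 4.2, 4.3]
print(sahm_triggered(rates))  # True: the recent rise clears the 0.50-point threshold
```

The point of the sketch is simply that the rule is a mechanical threshold on recent unemployment movements, which is why, as the passage notes, it can be triggered even when other indicators remain inconsistent with a recession.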
The primary indicators used by the NBER are not currently consistent with a recession, and several
remain strong. For example, real gross domestic product has been positive since the third quarter of 2022
and grew by 1.4% and 2.8% in the first and second quarters of 2024, with real personal consumption expenditures up 1.5% and 2.3% over the same period. Real personal income less transfers grew in May
and June 2024 and were up 1.8% over the year in June.
Thus far, the only indications of a weakening economy are coming from the labor market, and even there,
indicators are inconsistent. Although there has been a 0.9 percentage point increase in the unemployment
rate and nonfarm payroll employment growth has slowed, employment growth remained positive, which
is inconsistent with a recession. (Recessions typically feature falling employment within the first three
months.) Employment as measured by a different survey has shown some decreases, but the NBER does
not track this measure as closely.
The unemployment rate could be rising for reasons associated with a weakening economy (e.g., workers
losing their jobs) or for neutral reasons (e.g., new entrants to the labor force). Data on the reasons for
unemployment suggest that the unemployment rate has risen at least partly because the economy has
weakened. Almost two-thirds of the increase in unemployment in the past year has come from people who
have lost their jobs (mostly via temporary layoffs or jobs ending), whereas around one-third has come
from people entering or reentering the labor force. On the other hand, the rise in unemployment has not
coincided with a rise in layoffs and discharges—which are still lower than during the expansion that
preceded the pandemic—as would be expected if the economy were entering a recession. Additionally,
many economists assessed that the unemployment rate was unsustainably low for over two years. Some
cooling in the labor market could indicate a rise to a more sustainable rate. Now the key question is
whether it will continue to rise. Unemployment remains low by historical standards, and if it does not rise
much further, a recession can be avoided.
| Using only the context block provided is the United States in a recession? |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | Can you explain the new funding approved for passenger rail in the Investing in America Agenda? What were some of the passenger corridors approved for funding? Answer in a minimum of 300 words. | President Biden’s Investing in America Agenda – a key pillar of Bidenomics – is delivering world class-infrastructure across the country, expanding access to economic opportunity, and creating good-paying jobs. By delivering $66 billion from the Bipartisan Infrastructure Law – the largest investment in passenger rail since the creation of Amtrak 50 years ago – President Biden is delivering on his vision to rebuild America and win the global competition for the 21st century.
Today, the Biden-Harris Administration is announcing $8.2 billion in new funding for 10 major passenger rail projects across the country, including the first world-class high-speed rail projects in our country’s history. Key selected projects include: building a new high-speed rail system between California and Nevada, which will serve more than 11 million passengers annually; creating a high-speed rail line through California’s Central Valley to ultimately link Los Angeles and San Francisco, supporting travel with speeds up to 220 mph; delivering significant upgrades to frequently-traveled rail corridors in Virginia, North Carolina, and the District of Columbia; and upgrading and expanding capacity at Chicago Union Station in Illinois, one of the nation’s busiest rail hubs. These historic projects will create tens of thousands of good-paying, union jobs, unlock economic opportunity for communities across the country, and open up safe, comfortable, and climate-friendly travel options to get people to their destinations in a fraction of the time it takes to drive.
The Biden-Harris Administration is building out a pipeline of passenger rail projects in every region of the country in order to achieve the President’s vision of world-class passenger rail. Announced projects will add new passenger rail service to cities that have historically lacked access to America’s rail network, connecting residents to jobs, healthcare, and educational opportunities. Investments will repair aging rail infrastructure to increase train speeds, reduce delays, benefit freight rail supply chains to boost America’s economy, significantly reduce greenhouse emissions, and create good-paying union jobs. Additionally, electric high-speed rail trains will take millions of cars off the roads and reduce emissions, further cementing intercity rail as an environmentally-friendly alternative to flying or driving and saving time for millions of Americans. These investments will also create tens of thousands of good-paying union jobs in construction and related industries – adding to over 100,000 jobs that the President is creating through historic investments in world-class rail.
Today’s investment includes $8.2 billion through the Federal Railroad Administration’s Federal-State Partnership for Intercity Passenger Rail Program, as well as $34.5 million through the Corridor Identification and Development program to guide passenger rail development on 69 rail corridors across 44 states, ensuring that intercity rail projects are ready for implementation. President Biden will travel to Las Vegas, Nevada to make this announcement.
To date, President Biden has announced $30 billion for rail projects across the country – including $16.4 billion on the Northeast Corridor, $1.4 billion for passenger rail and freight rail safety projects, and $570 million to upgrade or mitigate railroad crossings.
Fed-State National Project selections include:
The Brightline West High-Speed Intercity Passenger Rail System Project will receive up to $3 billion for a new 218-mile intercity passenger rail system between Las Vegas, Nevada, and Rancho Cucamonga, California. The project will create a new high-speed rail system, resulting in trip times of just over 2 hours – nearly twice as fast as driving. This route is expected to serve more than 11 million passengers annually, taking millions of cars off the road and, thanks to all-electric train sets, removing an estimated 400,000 tons of carbon dioxide per year. This project will create 35,000 jobs supporting construction and support 1,000 permanent jobs in operations and maintenance once in service. Brightline’s agreement with the California State and Southern Nevada Building Trades will ensure that this project is built with good-paying union labor, and the project has reached a separate agreement with Rail Labor to employ union workers for its ongoing operations and maintenance. The project will also allow for connections to the Los Angeles Metro area via the Metrolink commuter rail system.
The California Inaugural High-Speed Rail Service Project will receive up to $3.07 billion to help deliver high-speed rail service in California’s Central Valley by designing and extending the rail line between Bakersfield and Merced, procuring new high-speed trainsets, and constructing the Fresno station, which will connect communities to urban centers in Northern and Southern California. This 171-mile rail corridor will support high-speed travel with speeds up to 220mph. The project will improve connectivity and increase travel options, along with providing more frequent passenger rail service, from the Central Valley to urban centers in northern and Southern California. New all-electric trainsets will produce zero emissions and be powered by 100% renewable energy. By separating passenger and freight lines, this project will benefit freight rail operations throughout California as well. This project has already created over 11,000 good-paying union construction jobs and has committed to using union labor for operations and maintenance.
The Raleigh to Richmond (R2R) Innovating Rail Program Phases IA and II project will receive up to $1.1 billion to build approximately additional parts of the Southeast Corridor from Raleigh to Wake Forest, North Carolina, including new and upgraded track, eleven grade separations and closure of multiple at-grade crossings. The investment will improve system and service performance by developing a resilient and reliable passenger rail route that will also contribute to freight and supply chain resiliency in the southeastern U.S. The proposed project is part of a multi-phased effort to develop a new passenger rail route between Raleigh, North Carolina, and Richmond, Virginia, and better connect the southern states to DC and the Northeast Corridor. Once completed, this new route will save passengers an estimated 90 minutes per trip.
The Long Bridge project, part of the Transforming Rail in Virginia – Phase II program, will receive $729 million to construct a new two-track rail bridge over the Potomac River to expand passenger rail capacity between Washington, D.C. and Richmond, VA. Nearly 6 million passengers travel over the existing bridge every year on Amtrak and Virginia Railway Express lines. This upgrade will reduce congestion and delays on this heavily-traveled corridor to our nation’s capital.
As part of President Biden’s vision for world-class passenger rail, the Administration is planning for future rail growth in new and unprecedented ways through the Bipartisan Infrastructure Law-created Corridor ID Program. The program establishes a new planning framework for future investments, and corridor selections announced today stand to upgrade 15 existing rail routes, establish 47 extensions to existing and new conventional corridor routes, and advance 7 new high-speed rail projects, creating a pipeline of intercity passenger rail projects ready for future investment.
Project selections include:
Scranton to New York, reviving a dormant rail corridor between Pennsylvania, New Jersey, and New York, to provide up to three daily trips for commuters and other passengers;
Colorado Front Range, a new rail corridor connecting Fort Collins, CO, and Pueblo, CO, to serve an area that currently has no passenger rail options;
The Northern Lights Express, connecting Minneapolis, MN and Duluth, MN, with several stops in Wisconsin, for greater regional connectivity;
Cascadia High-Speed Rail, a proposed new high-speed rail corridor linking Oregon, Washington, and Vancouver, with entirely new service;
Charlotte to Atlanta, a new high-speed rail corridor linking the Southeast and providing connection to Hartsfield-Jackson Airport, the busiest airport in the world; | [question]
Can you explain the new funding approved for passenger rail in the Investing in America Agenda? What were some of the passenger corridors approved for funding? Answer in a minimum of 300 words.
=====================
[text]
President Biden’s Investing in America Agenda – a key pillar of Bidenomics – is delivering world class-infrastructure across the country, expanding access to economic opportunity, and creating good-paying jobs. By delivering $66 billion from the Bipartisan Infrastructure Law – the largest investment in passenger rail since the creation of Amtrak 50 years ago – President Biden is delivering on his vision to rebuild America and win the global competition for the 21st century.
Today, the Biden-Harris Administration is announcing $8.2 billion in new funding for 10 major passenger rail projects across the country, including the first world-class high-speed rail projects in our country’s history. Key selected projects include: building a new high-speed rail system between California and Nevada, which will serve more than 11 million passengers annually; creating a high-speed rail line through California’s Central Valley to ultimately link Los Angeles and San Francisco, supporting travel with speeds up to 220 mph; delivering significant upgrades to frequently-traveled rail corridors in Virginia, North Carolina, and the District of Columbia; and upgrading and expanding capacity at Chicago Union Station in Illinois, one of the nation’s busiest rail hubs. These historic projects will create tens of thousands of good-paying, union jobs, unlock economic opportunity for communities across the country, and open up safe, comfortable, and climate-friendly travel options to get people to their destinations in a fraction of the time it takes to drive.
The Biden-Harris Administration is building out a pipeline of passenger rail projects in every region of the country in order to achieve the President’s vision of world-class passenger rail. Announced projects will add new passenger rail service to cities that have historically lacked access to America’s rail network, connecting residents to jobs, healthcare, and educational opportunities. Investments will repair aging rail infrastructure to increase train speeds, reduce delays, benefit freight rail supply chains to boost America’s economy, significantly reduce greenhouse emissions, and create good-paying union jobs. Additionally, electric high-speed rail trains will take millions of cars off the roads and reduce emissions, further cementing intercity rail as an environmentally-friendly alternative to flying or driving and saving time for millions of Americans. These investments will also create tens of thousands of good-paying union jobs in construction and related industries – adding to over 100,000 jobs that the President is creating through historic investments in world-class rail.
Today’s investment includes $8.2 billion through the Federal Railroad Administration’s Federal-State Partnership for Intercity Passenger Rail Program, as well as $34.5 million through the Corridor Identification and Development program to guide passenger rail development on 69 rail corridors across 44 states, ensuring that intercity rail projects are ready for implementation. President Biden will travel to Las Vegas, Nevada to make this announcement.
To date, President Biden has announced $30 billion for rail projects across the country – including $16.4 billion on the Northeast Corridor, $1.4 billion for passenger rail and freight rail safety projects, and $570 million to upgrade or mitigate railroad crossings.
Fed-State National Project selections include:
The Brightline West High-Speed Intercity Passenger Rail System Project will receive up to $3 billion for a new 218-mile intercity passenger rail system between Las Vegas, Nevada, and Rancho Cucamonga, California. The project will create a new high-speed rail system, resulting in trip times of just over 2 hours – nearly twice as fast as driving. This route is expected to serve more than 11 million passengers annually, taking millions of cars off the road and, thanks to all-electric train sets, removing an estimated 400,000 tons of carbon dioxide per year. This project will create 35,000 jobs supporting construction and support 1,000 permanent jobs in operations and maintenance once in service. Brightline’s agreement with the California State and Southern Nevada Building Trades will ensure that this project is built with good-paying union labor, and the project has reached a separate agreement with Rail Labor to employ union workers for its ongoing operations and maintenance. The project will also allow for connections to the Los Angeles Metro area via the Metrolink commuter rail system.
The California Inaugural High-Speed Rail Service Project will receive up to $3.07 billion to help deliver high-speed rail service in California’s Central Valley by designing and extending the rail line between Bakersfield and Merced, procuring new high-speed trainsets, and constructing the Fresno station, which will connect communities to urban centers in Northern and Southern California. This 171-mile rail corridor will support high-speed travel with speeds up to 220mph. The project will improve connectivity and increase travel options, along with providing more frequent passenger rail service, from the Central Valley to urban centers in northern and Southern California. New all-electric trainsets will produce zero emissions and be powered by 100% renewable energy. By separating passenger and freight lines, this project will benefit freight rail operations throughout California as well. This project has already created over 11,000 good-paying union construction jobs and has committed to using union labor for operations and maintenance.
The Raleigh to Richmond (R2R) Innovating Rail Program Phases IA and II project will receive up to $1.1 billion to build approximately additional parts of the Southeast Corridor from Raleigh to Wake Forest, North Carolina, including new and upgraded track, eleven grade separations and closure of multiple at-grade crossings. The investment will improve system and service performance by developing a resilient and reliable passenger rail route that will also contribute to freight and supply chain resiliency in the southeastern U.S. The proposed project is part of a multi-phased effort to develop a new passenger rail route between Raleigh, North Carolina, and Richmond, Virginia, and better connect the southern states to DC and the Northeast Corridor. Once completed, this new route will save passengers an estimated 90 minutes per trip.
The Long Bridge project, part of the Transforming Rail in Virginia – Phase II program, will receive $729 million to construct a new two-track rail bridge over the Potomac River to expand passenger rail capacity between Washington, D.C. and Richmond, VA. Nearly 6 million passengers travel over the existing bridge every year on Amtrak and Virginia Railway Express lines. This upgrade will reduce congestion and delays on this heavily-traveled corridor to our nation’s capital.
As part of President Biden’s vision for world-class passenger rail, the Administration is planning for future rail growth in new and unprecedented ways through the Bipartisan Infrastructure Law-created Corridor ID Program. The program establishes a new planning framework for future investments, and corridor selections announced today stand to upgrade 15 existing rail routes, establish 47 extensions to existing and new conventional corridor routes, and advance 7 new high-speed rail projects, creating a pipeline of intercity passenger rail projects ready for future investment.
Project selections include:
Scranton to New York, reviving a dormant rail corridor between Pennsylvania, New Jersey, and New York, to provide up to three daily trips for commuters and other passengers;
Colorado Front Range, a new rail corridor connecting Fort Collins, CO, and Pueblo, CO, to serve an area that currently has no passenger rail options;
The Northern Lights Express, connecting Minneapolis, MN and Duluth, MN, with several stops in Wisconsin, for greater regional connectivity;
Cascadia High-Speed Rail, a proposed new high-speed rail corridor linking Oregon, Washington, and Vancouver, with entirely new service;
Charlotte to Atlanta, a new high-speed rail corridor linking the Southeast and providing connection to Hartsfield-Jackson Airport, the busiest airport in the world;
https://www.whitehouse.gov/briefing-room/statements-releases/2023/12/08/fact-sheet-president-biden-announces-billions-to-deliver-world-class-high-speed-rail-and-launch-new-passenger-rail-corridors-across-the-country/
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
You must only respond to the prompt using information in the context block and no other sources. | How do licenses negotiated for theaters carry over to television? | The PROs
As mentioned above, although musical compositions were expressly made subject to
copyright protection starting in 1831, Congress did not grant music creators the
exclusive right to publicly perform their compositions until 1897.108 Though this right
represented a new way for copyright owners to derive profit from their musical works,
the sheer number and fleeting nature of public performances made it impossible for
copyright owners to individually negotiate with each user for every use, or detect every
case of infringement.109 ASCAP was established in 1914, followed by other PROs, to address the logistical issue of how to license and collect payment for the public
performance of musical works in a wide range of settings.110
Today, the PROs provide various different types of licenses depending upon the nature
of the use. Anyone who publicly performs a musical work may obtain a license from a
PRO, including terrestrial, satellite and internet radio stations, broadcast and cable
television stations, online services, bars, restaurants, live performance venues, and
commercial establishments that play background music.
Most commonly, licensees obtain a blanket license, which allows the licensee to publicly
perform any of the musical works in a PRO’s repertoire for a flat fee or a percentage of
total revenues.111 Some users opt for a blanket license due to its broad coverage of
musical works and relative simplicity as compared to other types of licenses. Large
commercial establishments such as bars, restaurants, concert venues, stores, and hotels
often enter into blanket licenses to cover their uses, paying either a percentage of gross
revenues or an annual flat fee, depending on the establishment and the type and amount
of use.112 Terrestrial radio stations obtain blanket licenses from PROs as well, usually by
means of the RMLC.113 Many television stations, through the TMLC, also obtain blanket
licenses.114
Less commonly used licenses include the per-program or per-segment license, which
allows the licensee to publicly perform any of the musical works in the PRO’s repertoire
for specified programs or parts of their programming, in exchange for a flat fee or a
percentage of that program’s advertising revenue.115 Unlike a blanket license, the perprogram or per-segment license requires more detailed reporting information, including
program titles, the specific music selections used, and usage dates, making the license
more burdensome for the licensee to administer.116
Users can also license music directly from music publishers through a direct license or a
source license. A direct license is simply a license agreement directly negotiated between the copyright owner and the user who intends to publicly perform the musical
work. Source licenses are commonly used in the motion picture industry, because the
PROs are prohibited from licensing public performance rights directly to movie theater
owners.117 Instead, film producers license public performance rights for the music used
in films at the same time as the synchronization rights, and pass the performance rights
along to the theaters that will be showing their films.118 In the context of motion
pictures, source licenses do not typically encompass non-theatrical performances, such
as on television. Thus, television stations, cable companies, and online services such as
Netflix and Hulu must obtain public performance licenses from the PROs to cover the
public performance of musical works in the shows and movies they transmit to end
users.119 | System instruction: You must only respond to the prompt using information in the context block and no other sources.
Prompt: How do licenses negotiated for theaters carry over to television?
Context block: The PROs
As mentioned above, although musical compositions were expressly made subject to
copyright protection starting in 1831, Congress did not grant music creators the
exclusive right to publicly perform their compositions until 1897.108 Though this right
represented a new way for copyright owners to derive profit from their musical works,
the sheer number and fleeting nature of public performances made it impossible for
copyright owners to individually negotiate with each user for every use, or detect every
case of infringement.109 ASCAP was established in 1914, followed by other PROs, to address the logistical issue of how to license and collect payment for the public
performance of musical works in a wide range of settings.110
Today, the PROs provide various different types of licenses depending upon the nature
of the use. Anyone who publicly performs a musical work may obtain a license from a
PRO, including terrestrial, satellite and internet radio stations, broadcast and cable
television stations, online services, bars, restaurants, live performance venues, and
commercial establishments that play background music.
Most commonly, licensees obtain a blanket license, which allows the licensee to publicly
perform any of the musical works in a PRO’s repertoire for a flat fee or a percentage of
total revenues.111 Some users opt for a blanket license due to its broad coverage of
musical works and relative simplicity as compared to other types of licenses. Large
commercial establishments such as bars, restaurants, concert venues, stores, and hotels
often enter into blanket licenses to cover their uses, paying either a percentage of gross
revenues or an annual flat fee, depending on the establishment and the type and amount
of use.112 Terrestrial radio stations obtain blanket licenses from PROs as well, usually by
means of the RMLC.113 Many television stations, through the TMLC, also obtain blanket
licenses.114
Less commonly used licenses include the per-program or per-segment license, which
allows the licensee to publicly perform any of the musical works in the PRO’s repertoire
for specified programs or parts of their programming, in exchange for a flat fee or a
percentage of that program’s advertising revenue.115 Unlike a blanket license, the perprogram or per-segment license requires more detailed reporting information, including
program titles, the specific music selections used, and usage dates, making the license
more burdensome for the licensee to administer.116
Users can also license music directly from music publishers through a direct license or a
source license. A direct license is simply a license agreement directly negotiated between the copyright owner and the user who intends to publicly perform the musical
work. Source licenses are commonly used in the motion picture industry, because the
PROs are prohibited from licensing public performance rights directly to movie theater
owners.117 Instead, film producers license public performance rights for the music used
in films at the same time as the synchronization rights, and pass the performance rights
along to the theaters that will be showing their films.118 In the context of motion
pictures, source licenses do not typically encompass non-theatrical performances, such
as on television. Thus, television stations, cable companies, and online services such as
Netflix and Hulu must obtain public performance licenses from the PROs to cover the
public performance of musical works in the shows and movies they transmit to end
users.119
|
ORIGINAL RESEARCH
published: 06 May 2021
doi: 10.3389/fpsyg.2021.637929

Revisiting False-Positive and Imitated Dissociative Identity Disorder

Igor Jacob Pietkiewicz*, Anna Bańbura-Nowak, Radosław Tomalski and Suzette Boon
Research Centre for Trauma & Dissociation, SWPS University of Social Sciences and Humanities, Katowice, Poland

Edited by: Hamed Ekhtiari, Laureate Institute for Brain Research, United States
Reviewed by: Hosein Mohaddes Ardabili, Mashhad University of Medical Sciences, Iran; Bo Bach, Psychiatry Region Zealand, Denmark
*Correspondence: Igor Jacob Pietkiewicz, [email protected]
Specialty section: This article was submitted to Psychopathology, a section of the journal Frontiers in Psychology
Received: 04 December 2020; Accepted: 14 April 2021; Published: 06 May 2021
Citation: Pietkiewicz IJ, Bańbura-Nowak A, Tomalski R and Boon S (2021) Revisiting False-Positive and Imitated Dissociative Identity Disorder. Front. Psychol. 12:637929. doi: 10.3389/fpsyg.2021.637929
ICD-10 and DSM-5 do not provide clear diagnosing guidelines for DID, making it
difficult to distinguish ‘genuine’ DID from imitated or false-positive cases. This study
explores the meaning which patients with false-positive or imitated DID attributed to their diagnosis. Eighty-five people who reported elevated levels of dissociative symptoms on the SDQ-20 participated in clinical assessment using the Trauma and Dissociation Symptoms
Interview, followed by a psychiatric interview. The recordings of six women, whose
earlier DID diagnosis was disconfirmed, were transcribed and subjected to interpretative
phenomenological analysis. Five main themes were identified: (1) endorsement and
identification with the diagnosis. (2) The notion of dissociative parts justifies identity
confusion and conflicting ego-states. (3) Gaining knowledge about DID affects the
clinical presentation. (4) Fragmented personality becomes an important discussion
topic with others. (5) Ruling out DID leads to disappointment or anger. To avoid
misdiagnoses, clinicians should receive more systematic training in the assessment
of dissociative disorders, enabling them to better understand subtle differences in the
quality of symptoms and how dissociative and non-dissociative patients report them.
This would lead to a better understanding of how patients with and without a dissociative
disorder report core dissociative symptoms. Some guidelines for a differential diagnosis
are provided.
Keywords: dissociative identity disorder (DID), false-positive cases, personality disorder, dissociation, differential diagnosis
INTRODUCTION
Multiple Personality Disorder (MPD) was first introduced in DSM-III in 1980 and re-named
Dissociative Identity Disorder (DID) in subsequent editions of the diagnostic manual (American
Psychiatric Association, 2013). Table 1 shows diagnostic criteria of this disorder in ICD-10, ICD-11, and DSM-5. Some healthcare providers perceive it as fairly uncommon or associated with
temporary trends (Brand et al., 2016). Even its description in ICD-10 (World Health Organization,
1993) starts with: “This disorder is rare, and controversy exists about the extent to which it is
iatrogenic or culture-specific” (p. 160). Yet, according to the guidelines of the International Society
for the Study of Trauma and Dissociation (International Society for the Study of Trauma and
Dissociation, 2011), the prevalence of DID in the general population is estimated between 1 and
3%. The review of global studies on DID in clinical settings by Sar (2011) shows the rate from
TABLE 1 | Diagnostic criteria for dissociative identity disorder.
ICD-10 Multiple personality disorder F44.81
(A) Two or more distinct personalities exist within the individual, only one being evident at a time.
(B) Each personality has its own memories, preferences, and behavior patterns, and at some time (and recurrently) takes full control of the individual’s behavior.
(C) There is inability to recall important personal information which is too extensive to be explained by ordinary forgetfulness.
(D) The symptoms are not due to organic mental disorders (F00–F09) (e.g., in epileptic disorders) or to psychoactive substance-related disorders (F10–F19) (e.g., intoxication or withdrawal).
ICD-11 Dissociative identity disorder 6B64
Dissociative identity disorder is characterized by disruption of identity in which there are two or more distinct personality states (dissociative identities) associated with
marked discontinuities in the sense of self and agency. Each personality state includes its own pattern of experiencing, perceiving, conceiving, and relating to self, the
body, and the environment. At least two distinct personality states recurrently take executive control of the individual’s consciousness and functioning in interacting with
others or with the environment, such as in the performance of specific aspects of daily life such as parenting, or work, or in response to specific situations (e.g., those
that are perceived as threatening). Changes in personality state are accompanied by related alterations in sensation, perception, affect, cognition, memory, motor
control, and behavior. There are typically episodes of amnesia, which may be severe. The symptoms are not better explained by another mental, behavioral or
neurodevelopmental disorder and are not due to the direct effects of a substance or medication on the central nervous system, including withdrawal effects, and are not
due to a disease of the nervous system or a sleep-wake disorder. The symptoms result in significant impairment in personal, family, social, educational, occupational, or
other important areas of functioning.
DSM-5 Dissociative identity disorder 300.14
(A) Disruption of identity characterized by two or more distinct personality states, which may be described in some cultures as an experience of possession. The
disruption in identity involves marked discontinuity in sense of self and sense of agency accompanied by related alterations in affect, behavior, consciousness,
memory, perception, cognition, and/or sensory-motor functioning. These signs and symptoms may be observed by others or reported by the individual.
(B) Recurrent gaps in the recall of everyday events, important personal information, and/or traumatic events that are inconsistent with ordinary forgetting.
(C) The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
(D) The disturbance is not a normal part of a broadly accepted cultural or religious practice. Note: In children, the symptoms are not better explained by imaginary
playmates or other fantasy play.
(E) The symptoms are not attributable to the physiological effects of a substance (e.g., blackouts or chaotic behavior during alcohol intoxication) or another medical
condition (e.g., complex partial seizures).
0.4 to 14%. However, in studies using clinical diagnostic interviews among psychiatric in-patients, and in European studies, these numbers were lower (Friedl et al., 2000). The discrepancies apparently depend on the sample, the methodology and diagnostic interviews used by researchers.

Diagnosing complex dissociative disorders (DID or Other Specified Dissociative Disorder, OSDD) is challenging for several reasons. Firstly, patients present a lot of avoidance and rarely report dissociative symptoms spontaneously without direct questioning (Boon and Draijer, 1993; International Society for the Study of Trauma and Dissociation, 2011; Dorahy et al., 2014). In addition, standard mental state examination does not include these symptoms and healthcare professionals do not receive appropriate training in diagnosing dissociative disorders (Leonard et al., 2005). Secondly, complex dissociative disorders are polysymptomatic, and specialists would rather diagnose these patients with disorders more familiar to them from clinical practice, e.g., anxiety disorders, eating disorders, schizophrenia, or borderline personality disorder (Boon and Draijer, 1995; Dell, 2006; Brand et al., 2016). For these reasons, complex dissociative disorders are underdiagnosed and often misdiagnosed. For example, 26.5–40.8% of DID patients would already have been diagnosed and treated for schizophrenia (Putnam et al., 1986; Ross et al., 1989). On the other hand, because there is so much information about DID in the media (Hollywood productions, interviews and testimonies published on YouTube, blogs), people who are confused about themselves and try to find an accurate diagnosis for themselves may learn about DID symptoms on the Internet, identify themselves with the disorder, and later (even unintentionally) report core symptoms in a very convincing way (Draijer and Boon, 1999). This presents a risk of making a false positive diagnosis, which is unfavorable for the patient, because using treatment developed for DID with patients without autonomous dissociative parts may be inefficient or even reinforce their pathology.

Authors who wrote about patients inappropriately diagnosed with this disorder used terms such as ‘malingering’ or ‘factitious’ DID (Coons and Milstein, 1994; Thomas, 2001). According to Draijer and Boon (1999), both labels imply that patients intentionally simulate symptoms, either for external gains (financial benefits or justification for one’s actions in court) or for other forms of gratification (e.g., interest from others), while in many cases their motivation is not fully conscious. Getting a DID diagnosis can also provide structure for inner chaos and incomprehensible experiences, and be associated with hope and belief it is real. On the other hand, diagnostic errors often result in inappropriate treatment plans and procedures.

Already in 1995, Boon and Draijer stressed that a growing number of people self-diagnosed themselves based on information from literature and the Internet, and reported symptoms by the book during psychiatric or psychological assessment. Based on their observation of 36 patients in whom DID had been ruled out after applying the structured clinical interview SCID-D, these clinicians identified differences between genuine and imitated DID. They classified their participants into three groups: (1) borderline personality disorder, (2) histrionic personality disorder, or (3) persons with severe dissociative symptoms but not DID. Participants in that study reported symptoms similar to DID patients, including: amnesia (but only for unacceptable behavior), depersonalisation, derealisation, identity confusion, and identity alteration. However, they presented themselves and interacted with the therapist in very
Frontiers in Psychology | www.frontiersin.org
2
May 2021 | Volume 12 | Article 637929
Pietkiewicz et al.
Revisiting False-Positive and Imitated DID
different ways. While DID patients are usually reluctant to
talk about their symptoms and experience their intrusions as
shameful, people who imitated DID were eager to present their
problems, sometimes in an exaggerated way, in an attempt to
convince the clinician that they suffered from DID (Boon and
Draijer, 1995; Draijer and Boon, 1999). Similar observations
were expressed by Thomas (2001) saying that people with
imitated DID can present their history chronologically, using
the first person even when they are highly distressed or allegedly
presenting an altered personality, and are comfortable with
disclosing information about experiences of abuse. They can
talk about intrusions of dissociative parts, hearing voices or
difficulties controlling emotions, without shame.
Unfortunately, ICD-10, ICD-11, and DSM-5 offer no specific
guidelines on how to differentiate patients with personality
disorders and dissociative disorders by the manner in which
they report symptoms. There are also limited instruments to
distinguish between false-positive and false-negative DID. From
the clinical perspective, it is also crucial to understand the motives
for being diagnosed with DID, and disappointment when this
diagnosis is disconfirmed. Accurate assessment can contribute to
developing appropriate psychotherapeutic procedures (Boon and
Draijer, 1995; Draijer and Boon, 1999). Apart from observations
already referred to earlier in this article, there are no qualitative
analyses of false-positive DID cases in the past 20 years.
Most research was quantitative and compared DID patients
and simulators in terms of cognitive functions (Boysen and
VanBergen, 2014). This interpretative phenomenological analysis
is an idiographic study which explores personal experiences and
meaning attributed to conflicting emotions and behaviors in
six women who had previously been diagnosed with DID and
referred to the Research Centre for Trauma and Dissociation for
re-evaluation. It explores how they came to believe they have DID
and what had led clinicians to assume that these patients could be
suffering from this disorder.
MATERIALS AND METHODS

This study was carried out in Poland in 2018 and 2019. Rich qualitative material collected during in-depth clinical assessments was subjected to the interpretative phenomenological analysis (IPA), a popular methodological framework in psychology for exploring people’s personal experiences and interpretations of phenomena (Smith and Osborn, 2008). IPA was selected to build a deeper understanding of how patients who endorsed and identified with dissociative identity disorder made sense of the diagnosis and what it meant for them to be classified as false-positive cases during reassessment.

Interpretative phenomenological analysis uses phenomenological, hermeneutic, and idiographic principles. It employs ‘double hermeneutics,’ in which participants share their experiences and interpretations, followed by researchers trying to make sense and comment on these interpretations. IPA uses small, homogenous, purposefully selected samples, and data are carefully analyzed case-by-case (Smith and Osborn, 2008; Pietkiewicz and Smith, 2014).

Procedure
This study is part of a larger project examining alterations
in consciousness and dissociative symptoms in clinical and
non-clinical groups, held at the Research Centre for Trauma
& Dissociation, financed by the National Science Centre, and
approved by the Ethical Review Board at the SWPS University
of Social Sciences & Humanities. Potential candidates enrolled
themselves or were registered by healthcare providers via an
application integrated with the website www.e-psyche.eu. They
filled in demographic information and completed online tests,
including: Somatoform Dissociation Questionnaire (SDQ-20,
Pietkiewicz et al., 2018) and Trauma Experiences Checklist
(Nijenhuis et al., 2002). Those with elevated SDQ-20 scores
(above 28 points) or those referred for differential diagnosis were
consulted and if dissociative symptoms were confirmed, they
were invited to participate in an in-depth clinical assessment
including a series of interviews, video-recorded and performed at
the researcher’s office by the first author who is a psychotherapist
and supervisor experienced in the dissociation field. In Poland,
there are no gold standards for diagnosing dissociative disorders.
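As a purely illustrative sketch (not the study's actual tooling), the screening rule described above — an SDQ-20 score above 28 points, or referral for differential diagnosis, followed by confirmation of dissociative symptoms at consultation — could be written as follows. The field names are assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative sketch only: the field names and the simple rule below are an
# assumption based on the procedure described above, not the study's own code.

SDQ20_CUTOFF = 28  # elevated somatoform dissociation, per the description above

@dataclass
class Candidate:
    sdq20_score: int
    referred_for_differential_diagnosis: bool = False
    dissociative_symptoms_confirmed_at_consultation: bool = False

def invite_to_clinical_assessment(c: Candidate) -> bool:
    """Return True if the candidate proceeds to the in-depth clinical assessment."""
    screened_in = c.sdq20_score > SDQ20_CUTOFF or c.referred_for_differential_diagnosis
    # Only candidates whose dissociative symptoms are confirmed during the
    # consultation are invited to the interview phase.
    return screened_in and c.dissociative_symptoms_confirmed_at_consultation

if __name__ == "__main__":
    print(invite_to_clinical_assessment(
        Candidate(sdq20_score=38, dissociative_symptoms_confirmed_at_consultation=True)))  # True
    print(invite_to_clinical_assessment(Candidate(sdq20_score=20)))  # False
```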
The first interview was semi-structured, open-ended and
explored the patient’s history, main complaints and motives for
participation. It included questions such as: What made you
participate in this study? What are your main difficulties or
symptoms in daily life? What do you think caused them? Further
questions were then asked to explore participants’ experiences
and meaning-making. This was followed by the Trauma and
Dissociation Symptoms Interview (TADS-I, Boon and Matthess,
2017). The TADS-I is a new semi-structured interview intended
to identify DSM-5 and ICD-11 dissociative disorders. The
TADS-I differs in several ways from other semi-structured
interviews for the assessment of dissociative disorders. Firstly,
it includes a significant section on somatoform dissociative
symptoms. Secondly, it includes a section addressing other
trauma-related symptoms for several reasons: (1) to obtain a
more comprehensive clinical picture of possible comorbidities,
including symptoms of PTSD and complex PTSD, (2) to gain
a better insight into the (possible) dissociative organization of
the personality: patient’s dissociative parts hold many of these
comorbid symptoms and amnesia, voices or depersonalisation
experiences are often associated with these symptoms; and (3)
to better distinguish between complex dissociative disorders,
personality disorders and other Axis I disorders and false positive
DID. Finally, the TADS-I also aims to distinguish between
symptoms of pathological dissociation indicating a division of
the personality and symptoms which are related to a narrowing
or a lowering of consciousness, and not to the structural
dissociation of the personality. Validation testing of the TADS-I
is currently underway. TADS interviews ranging from 2 to 4 h
were usually held in sessions of 90 min. Interview recordings
were assessed by three healthcare professionals experienced in
the dissociation field, who discussed each case and consensually
came up with a diagnosis based on ICD-10. An additional mental
state examination was performed by the third author who is
a psychiatrist, also experienced in the differential diagnosis of
dissociative disorders. He collected medical data, double-checked
the most important symptoms, communicated the results and
discussed treatment indications. Qualitative data collected from
six patients out of 85 were selected for this interpretative
phenomenological analysis, based on the following criteria for
inclusion, which could ensure a homogenous sample expected of
IPA studies – (a) female, (b) previously diagnosed or referred to
rule in/out DID, (c) endorsement and identification with DID, (d)
dissociative disorder disconfirmed in the assessment. Interviews
with every participant in this study ranged from 3 h 15 min to 7 h
20 min (mean: 6 h).
Participants
Participants of this IPA were six female patients aged between
22 and 42 years who were selected out of 86 people examined
in a larger study exploring dissociation and alterations in
consciousness in clinical and non-clinical groups. (Participants
in the larger study met criteria of different diagnoses and
seven among them had ‘genuine’ DID). These six patients did
not meet DID criteria on the TADS-I interview but believed
themselves that they qualified for that diagnosis. Four of them
had higher education, two were secondary school graduates.
All of them registered in the study by themselves hoping to
confirm their diagnosis but two (Olga and Katia) were referred
by psychiatrists, and the others by psychotherapists. All of them
traveled from far away, which showed their strong motivation
to participate in the assessment. Four had previously had
psychiatric treatment and five had been in psychotherapy due
to problems with emotional regulation and relationships. In
the cases of Victoria and Dominique, psychotherapy involved
working with dissociative parts. None of them recalled any
physical or sexual abuse, but three (Dominique, Victoria, and
Mary), following therapists’ suggestions, were trying to seek
such traumatic memories to justify their diagnosis. They all felt
emotionally neglected by carers in childhood and emotionally
abused by significant others. None of them reported symptoms
indicating the existence of autonomous dissociative parts.
None had symptoms indicating amnesia for daily events, but
four declared not remembering single situations associated
with conflicting emotions, shame, guilt, or conversations
during which they were more focused on internal experiences
rather than their interlocutors. None experienced PTSD
symptoms (e.g., intrusive traumatic memories and avoidance),
autoscopic phenomena (e.g., out-of-body experiences), or
clinically significant somatoform symptoms. None had
auditory verbal hallucinations but four intensely engaged in
daydreaming and experienced imagined conversations as very
real. All of them had been seeking information about DID
in literature and the Internet. For more information about
them see Table 2. Their names have been changed to protect
their confidentiality.
The Researchers

The principal investigator (IJP) is a psychotherapist, supervisor, and researcher in the field of community health psychology and clinical psychology. The second co-investigator (RT) is a psychiatrist, psychotherapist, and supervisor. The third co-investigator (SB) is a clinical psychologist, psychotherapist, supervisor, and a consulting expert in forensic psychology, who also developed the TADS-I. They are all mentors and trainers of the European Society for Trauma and Dissociation, with significant expertise in the assessment of post-traumatic conditions. The first co-investigator (AB) has a master’s degree in psychology and is a Ph.D. candidate. She is also a psychotherapist in training. All authors coded and discussed their understanding of data. Their understanding and interpretations of symptoms reported by participants were influenced by their background knowledge and experience in diagnosing and treating patients with personality disorders and dissociative disorders.

Data Analysis
Verbatim transcriptions were made of all video recordings, which
were analyzed together with researchers’ notes using qualitative
data-analysis software – NVivo11. Consecutive analytical steps
recommended for IPA were employed in the study (Pietkiewicz
and Smith, 2014). For each interview, researchers watched
the recording and carefully read the transcript several times.
They individually made notes about body language, facial
expressions, the content and language use, and wrote down
their interpretative comments using the ‘annotation’ feature
in NVivo10. Next, they categorized their notes into emergent
themes by allocating descriptive labels (nodes). The team then
compared and discussed their coding and interpretations. They
analyzed connections between themes in each interview and
between cases, and grouped themes according to conceptual
similarities into main themes and sub-themes.
Credibility Checks
During each interview, participants were encouraged to give
examples illustrating reported symptoms or experiences.
Clarification questions were asked to negotiate the meaning
participants wanted to convey. At the end of the interview,
they were also asked questions to check that their responses
were thorough. The researchers discussed each case thoroughly
and also compared their interpretative notes to compare
their understanding of the content and its meaning (the
second hermeneutics).
RESULTS
Participants in this study explained how they concluded they
were suffering from DID, developed knowledge about the
syndrome and an identity of a DID patient, and how this affected
their everyday life and relationships. Five salient themes appeared
in all interviews, as listed in Table 3. Each theme is discussed
and illustrated with verbatim excerpts from the interviews, in
accordance with IPA principles.
Theme 1: Endorsement and Identification With the Diagnosis
All six participants hoped to confirm they had DID. They read books and browsed the Internet seeking information about dissociation, and watched YouTube videos presenting people describing multiple personalities. Dominique, Victoria, Mary, and Karina said that a mental health professional suggested this diagnosis to them. Dominique remembers consulting a psychiatrist when she was 15, because she had problems controlling anger at home or in public places. She initially found descriptions of borderline personality captured her experiences well enough, but a psychiatrist refuted the idea and recommended further diagnostics toward a dissociative disorder. However, the girl refused to go to hospital for observation.

During an argument with my mother I felt as if some incredible force took control and I smashed the glass in the cabinet with my hand. It was like being under control of an alien force. I started reading about borderline and I thought I had it. I found a webpage about that and told my mother I should see a psychiatrist. I went for a consultation and told her my story. This lady said: “Child, you don’t have borderline, but multiple personality.” She wanted to keep me in the psychiatric unit but I did not agree to stay for observation. (Dominique).

This led Dominique to research the new diagnosis. Karina also said she was encouraged to seek information about DID, when a doctor suggested she might be suffering with it.

When I was 11, I had problems at school and home. Other children made fun of me. My mom took me to a doctor and he said I had borderline, but later I was diagnosed with an anxiety disorder. That doctor also suggested I had DID and told me that I should read more about this diagnosis. (Karina).
TABLE 2 | Study participants.
Name
Participant’s characteristics
Victoria
Age 22, single, lives with parents and younger brother. Stopped her studies after 3 years and was hospitalized in a psychiatric facility for a short period
due to problems with emotions and relationships. Reports difficulties with recognizing and expressing emotions, emptiness, feels easily hurt and
rejected, afraid of abandonment. Perceives herself as unimportant and worthless, sometimes cuts herself for emotional relief. Maintains superficial
relationships, does not trust people; in childhood was frequently left alone with grandparents because her parents traveled; described her parents as
setting high expectations, mother as getting easily upset and impulsive. No substance use. No history of physical or sexual trauma. Her maternal
grandfather abused alcohol but was not violent; no history of suicides in her family. Scored 38 points in SDQ-20 but no significant somatoform
symptoms reported during clinical assessment.
Karina
Age 22, single, secondary education. Enrolled in university programs twice but stopped. Acting is a hobby; recently worked as a waitress or hostess,
currently unemployed. Has had psychiatric treatment for 17 years due to anxiety and problems in relationships. Two short hospital admissions; in
psychodynamic psychotherapy in last 2 years. Reports emotional instability, feeling depressed, anxious, and lonely; maintains few relationships;
experiences conflicts with expressing anger and needs for dependency, no self-harm. She had periods of using alcohol excessively in the past, currently
once a month, no drugs. No family members used psychiatric help. Reports abandonment, emotional and physical abuse in childhood and eagerly
talks about these experiences. Scored 68 points in SDQ-20 but no significant somatoform symptoms reported during clinical assessment.
Dominique
Age 33, higher education, married, three children. Works as a playwright, comes from an artistic family. Was given away to her grandparents as a baby
and returned to parents and brothers when she was seven; often felt abandoned and neglected. She had learning difficulties and problems in
relationships, mood regulation, auto-aggressive behavior, feelings of emptiness and loneliness. Denies using alcohol or drugs; at secondary school
abused marihuana. Her paternal grandmother had psychosis, her father abused marihuana and mother was treated for depression. Reports poverty at
home. No suicides in family. Often retreated into her fantasy world in which she developed a story about boys kept in a resocialisation center. Has had
psychiatric treatment and counseling for 20 years. Scored 52 points in SDQ-20 but no somatoform symptoms confirmed during clinical assessment.
Mary
Age 34, higher education, married. Works in the creative industry and engaged in proselytic activities as an active Jehovah’s Witness (joined the
organization 10 years earlier, encouraged by her mother). Has had EMDR therapy for 2 years due to problems maintaining relationships and managing
anger. When her therapist asked if she felt there were different parts inside her, she started exploring information about DID. She denies smoking or
using any drugs, alcohol. Mother suffered from mild depression. No suicides in family. Scored 48 points in SDQ-20 but no somatoform symptoms
confirmed during clinical assessment.
Olga
Age 40, higher education, single. Works in social care. Reports depressive mood, low self-esteem, difficulties with concentration, problems with social
contacts. Occasionally uses alcohol in small doses, no drugs. Describes her mother as demanding but also distant and negligent because she was
busy with her medical practice. Father withdrawn and depressed but never used psychiatric treatment. No other trauma history. No suicides in family.
Tried psychotherapy four times but usually terminated treatment after a while. Her psychiatrist referred her for evaluation of memory problems, and
confirming DID. Scored 31 points in SDQ-20; confirms a few somatoform symptoms: headaches, symptoms associated with cystitis, detachment from
bodily sensations.
Katia
Age 42, post-graduate education. Unemployed. On social benefits for 15 years due to neurological and pulmonary symptoms, complications after
urological surgeries. Reports low self-esteem, self-loathing, problems in establishing or maintaining relationships, feeling lonely, rejected and not
understood. Inclinations toward passive-aggressive behavior toward people representing authority, fatigue, insecurity about her financial situation.
Reports no alcohol or drug use. Mother treated for depression. No suicides in family. Scored 69 points in SDQ-20; multiple somatic complaints
associated with Lyme disease, describes mother as emotionally and physically abusive, and father as abandoning and unprotecting. Has never used
psychotherapy; was referred for consultation by a psychiatrist after persuading him that she had DID symptoms.
Participants’ names have been changed to protect their confidentiality.
TABLE 3 | Salient themes identified during the interpretative
phenomenological analysis.
Theme 1:
Endorsement and identification with the diagnosis
Theme 2:
Using the notion of dissociative parts to justify identity confusion
and conflicting ego-states
Theme 3:
Gaining knowledge about DID affects the clinical presentation
Theme 4:
Fragmented personality becomes an important discussion topic
with others
Theme 5:
Ruling out DID leads to disappointment or anger.
Victoria and Mary shared similar stories about
psychotherapists suggesting the existence of dissociative parts,
having readily accepted this new category as a good explanation
for aggressive impulses or problems with recalling situations
evoking guilt or shame. Dominique and Victoria stressed,
however, that, apart from feeling emotionally abandoned, they
could not trace any significant traumas in their early childhoods,
although therapists maintained that such events must be present
in dissociative patients.
different expectations. Whoever comes up front, then I have these
ideas. (Dominique).
Dominique neither had amnesia nor found evidence for
leading separate lives and engaging herself in activities associated
with her characters. She maintained her job as a playwright, and
merely imagined alternative scenarios of her life, expressed by
her inner heroes. In other parts of the interview, she referred
to them as ‘voices inside,’ but admitted she never heard them
acoustically. They were her own vivid thoughts representing
different, conflicting opinions or impulses.
Katia said she felt internally fragmented. There were times
when she engaged in certain interests, knowledge and skills, but
she later changed her goals. Fifteen years ago she gave up her
academic career and went on sickness benefit when she became
disabled due to medical problems; she experienced this as a great
loss, a failure, which affected her sense of identity and purpose.
I have no idea why I have this [DID]. My therapist looked for
evidence of childhood trauma, which sounds like the easiest
explanation, but I don’t feel I had any horrific memories which
I threw out of my consciousness. (Victoria).
Katia and Olga had used psychiatric treatment for anxiety
and depression for years. After exploring information about
different mental disorders they concluded they had DID.
They thought there was a similarity between their personal
experiences and those of people publishing testimonials about
multiple personalities.
In recent years I have a growing sense of identity fragmentation. I
have problems with defining my identity because it changes. I used
to feel more stable in the past. I had these versions of myself which
were more dominating, so I had a stronger sense of identity. For
example, 20 years ago there was this scientist. I was studying and
felt like a scientist, attending conferences. Now I don’t have that
and I don’t know who I am. [. . .] I also have changing interests and
hobbies because of different personalities. Long ago I liked certain
music, played the guitar, sang songs. I don’t do that anymore, I
suddenly lost interest in all that. (Katia).
I tried to understand this battle inside, leading me to stagnation.
I didn’t know how to describe that but I recently bought a book
Healing the fragmented selves of trauma survivors, and everything
was explained there. Some of these things I have discovered myself
and some were new to me. (Olga).
Subsequently, Katia presented to her doctor a review
of literature about DID, trying to persuade him that she
had this disorder.
Theme 2: Using the Notion of
Dissociative Parts to Justify Identity
Confusion and Conflicting Ego-States
She described changes in her professional and social lives
in terms of switches between dissociative parts. Although she
maintained the first person narrative (“I was studying,” “I played,”
or “I sang”), indicating some sense of continuity, she thought it
proved the existence of two or more distinct personalities.
Participants also reported thoughts, temptations, impulses or
actions which seemed to evoke conflicting feelings. Attributing
them to ‘something inside that is not-me’ could free them from
guilt or shame, so they used a metaphor of someone taking over,
logging in, or switching. Dominique thought it was inappropriate
to express disappointment or anger, but she accepted the thought
that her dissociative parts were doing this.
Once participants had embraced the idea of having multiple
personalities, they seemed to construct inner reality and justify
conflicting needs, impulses or behaviors as an expression of
dissociative parts. They referred to being uncertain about who
they were and having difficulties recognizing personal emotions,
needs or interests. Some of them felt it was connected to a
negative cognition about themselves as worthless, unimportant,
and not deserving to express what they felt or wanted. Victoria
said she would rather define herself through the eyes of others:
When I’m angry at my therapist, it is not really me but somebody
inside who gets angry easily. Greg often switches on in such
situations and says: “Tell her this and this”. [. . .] I went to a shop
once and discovered that the price on the label was not for a whole
package of batteries but a single one. And suddenly Greg switched
on and had a row with the cashier. I mean, I did it, but wound up
by his anger. This is so weird, I wouldn’t react like that. They just
charged incorrectly and I would normally ignore that but Greg
said: “I give a shit about their mistakes. I won’t accept that.” What
a failure! (Dominique).
My therapist asked what I wanted or needed. It turned out that
without other people’s expectations or preferences to which I
normally adjust, I wouldn’t know who I am or what I want. I
usually engage in my friends’ hobbies and do what I think gives
them pleasure. Otherwise, I think they will not like me and reject
me, because I have nothing to offer. (Victoria).
Since a young age, Dominique tended to immerse herself in
a fantasy world, developing elaborated scenarios about people
living in a youth center administered by a vicious boss. Different
characters in her ‘Story’ represented specific features, interests
and plans she had.
Mary said she had parts that expressed anger, sadness,
and needs associated with attachment. She observed them and
allowed them to step in, when situations required.
Well, there is John who is a teacher and researcher. He teaches
mathematics. I have no skills in maths at all. Tim is a philosopher
and would like to train philosophers, enroll doctoral studies. He
would like me to study philosophy but the rest of the system
wants me to be a worrier. Ralf is a caring nurse and would
like to become a paramedic. It is difficult to reconcile all these
There were situations in my life when the teenager must have been
active. She protected me. She is ready to fight; I am not like that
at all. I hate violence, and that teenager likes using force to protect
me. [. . .] My therapist suggested I call her after this interview if I
but not necessarily related to trauma. Katia said she recently
remembered the picture of the house and garden where she
played as a child and associated these experiences with moments
of joy. Karina also exemplified her flashbacks with ‘intrusions of
happy memories’ which belonged to other personalities:
do not feel well. I didn’t accept that but the [inner] girls got upset
and told me I needed her help. They made me comply, so I agreed
to call her if I do not feel well. It has always been like this. (Mary).
During assessment, no participant provided evidence for the
existence of autonomous dissociative parts. It seems that the
inner characters described by them personified unintegrated ego-states which used to evoke conflicting feelings.
Sometimes I begin to laugh but this is not my laughter, but the
laughter of sheer joy. Someone inside me is very happy and wants
to talk about happy childhood memories, make jokes. (Karina).
Theme 3: Exploring Personal
Experiences via the Lens of Dissociation
Mary said a child part of her was responsible for flashbacks and
making comments about current situations. However, she later
denied hearing voices or having any other Schneider’s symptoms.
Reading books, websites and watching videos of people who
claimed to have DID, encouraged them to compare themselves,
talk about and express ‘multiple personalities.’ The participants
became familiar with specialist terms and learned about core
symptoms mentioned in psychiatric manuals.
I can hear her comments, that she does not like something. I can
be flooded by emotions and have flashbacks associated with that
child. For example, there is a trigger and I can see things that
this child has seen. She is showing me what was happening in
her life. (Mary).
I read First person plural which helped me understand what this
is all about. The drama of the gifted child and The body keeps the
score. More and more girls started to appear. There is a 6-month
old baby which showed up only 2 months ago, a sad 11-year
old teenager, and a 16-year old who thinks I am a loser. I was
a teenager like that. Now she is having problems and becoming
withdrawn; there are fewer switches, because she knows we need
to help the little one first. (Mary).
Participants discussed their dissociative parts, their names and
features, exhibiting neither avoidance nor fear or shame. On
the contrary, they seemed to draw pleasure by smiling, showing
excitement and eagerness to produce more examples of their
unusual experiences. At the beginning of the interview, Karina
was very enthusiastic and said, “My heart is beating so fast, as if I
were in fight-or-flight mode.”
Olga was also inspired by books. Not only did she find
similarities to trauma survivors but she made new discoveries
and thought there were other experiences she had been unaware
of earlier. Victoria started using techniques which literature
recommended for stabilization in dissociative disorders. She
said these books helped her understand intense emotions and
improve concentration.
Theme 4: Talking About DID Attracts
Attention
Not only were multiple personalities a helpful metaphor for
expressing conflicting feelings or needs (already mentioned
in Theme 2), but they also became an important topic of
conversations with family or friends.
This explains everything that happens to me, why I get so angry.
I also found anchors helpful. I focus on certain objects, sounds or
smells which remind me where I am, instead of drifting away into
my thoughts. (Victoria).
My husband says sometimes: “I would like to talk to the little girl.”
He then says that I start behaving differently. I also talk to my
therapist using different voices. Sometimes, she addresses them
asking questions. If questions are asked directly, they respond, but
there are times I do not allow them to speak, because the teenager
part can be very mean and attacks people. (Mary).
It seemed that exploring information about DID encouraged
changes in participants’ clinical presentation. At first, they
merely struggled with emotional liability or detachment, internal
conflicts, and concentration problems. Later, they started
reporting intrusions of dissociative parts or using clinical terms
(e.g., flashback) for experiences which were not necessarily
clinical symptoms. Dominique said that the characters of her
story would often ‘log in’ and take control. She demonstrated
that during the interview by changing her voice and going
into a ‘trance.’ She created her own metaphors, explaining
these experiences and comparing them with those described in
literature. She stressed that she never had amnesia and remained
aware of what was happening during her ‘trance.’
It may have been easier for Mary to express her needs for
dependency and care by ascribing them to a little girl and,
because she felt awkward about feeling angry with the therapist,
attributing hostile impulses to a teenager could give her a sense
of control and reduce guilt. Karina decided to create a video-blog for documenting dissociative parts, and shared her videos
with people interested in DID. She said she was surprised to find
clips in which she looked dreadful, having her make-up smeared
all over the face, because she had no memory of doing that.
However, she showed no signs that it bothered her. She discussed
the videos with her best friend, a DID fan who had encouraged
her to enroll in the study in order to confirm her diagnosis.
They were collecting evidence to support the idea that she had
a dissociative disorder, which she presented one by one, before
being asked about details.
I think it is a form of dissociation on the emotional level. I read a
lot. . . The minds of Billy Milligan or First person plural. For sure, I
do not have an alteration of personality. I have co-consciousness.
My theory is, we are like a glove, we all stem from one trunk, but
we are like separate fingers. (Dominique).
Mark [her friend] reads a lot about DID. He says I sometimes talk
in a high voice which is not the way I usually talk. He refers to
us as plural. [. . .] In some of these videos I do not move or blink
While participants maintained they had flashbacks, they
understood them as sudden recollections of past memories
for a minute. I look at some point and there is no expression on
my face. I can remember things until this moment, and later I
discover myself looking like something from Creepypastas. I am
so sorry for people who have to see this. . . and I found my diary.
I have been writing diaries since I was seven. I sometimes have no
memory for having written something. I need to find these notes
because I would like to write a book about a fantasy world and
inner conflicts. (Karina).
another possibility. It is some information but I have not heard
anything new. (Karina).
Only Victoria seemed relieved that her DID diagnosis was not
confirmed. She was happy to discuss how attachment problems or
conflicts with expressing emotions and needs affected her social
life and career, and receive guidelines for future treatment. She
felt liberated from having to uncover childhood traumas that her
therapist expected her to have as a dissociative patient.
Dominique and Katia also wrote journals to record
dissociative experiences. Katia hoped to be recognized as
an expert-by-experience and develop her career in relation
to that. She brought with her a script of a book she hoped
to publish one day.
I was hoping that you would find another explanation for my
problems. . . for what is wrong with me, why I feel so sensitive
or spaced out, because it is annoying. I would like to know what is
going on. I don’t think I’ve had any severe trauma but everybody
wants to talk about trauma all the time. (Victoria).
Theme 5: Ruling Out DID Leads to
Disappointment or Anger
DISCUSSION
Four participants were openly disappointed that their DID
diagnosis was not confirmed. They doubted if their descriptions
were accurate enough, or they challenged the interviewer’s
understanding of the symptoms. Katia also suggested that she
was incapable of providing appropriate answers supporting her
diagnosis due to amnesia and personality alterations.
ICD-10 and DSM-5 provide inadequate criteria for diagnosing
DID, basically limited to patients having distinct dissociative
identities with their own memories, preferences and behavioral
patterns, and episodes of amnesia (American Psychiatric
Association, 2013; World Health Organization, 1993). Clinicians
without experience of DID may therefore expect patients
to present disruptions of identity during a consultation and
spontaneously report memory problems. However, trauma
specialists view DID as a ‘disorder of hiddenness’ because patients
often find their dissociative symptoms bizarre and confusing and
do not disclose them readily due to their shame and the phobia
of inner experiences (Steele et al., 2005, 2016; Van der Hart et al.,
2006). Instead, they tend to undermine their significance, hide
them and not report them during consultations unless asked
about them directly. Dissociative patients can also be unaware
of their amnesia and ignore evidence for having done things
they cannot remember because realizing that is too upsetting.
Contrary to that, this study and the one conducted in 1999 in
the Netherlands by Draijer and Boon, show that some people
with personality disorders enthusiastically report DID symptoms
by the book, and use the notion of multiple personalities to
justify problems with emotional regulation, inner conflicts, or
to seek attention. As with Dutch patients, Polish participants
were preoccupied with their alternate personalities and two
tried to present a ‘switch’ between parts. Their presentations
were naïve and often mixed with lay information on DID.
However, what they reported could be misleading for clinicians
inexperienced in the dissociation field or those lacking the
appropriate tools to distinguish a genuine dissociative disorder
from an imitated one.
Therefore, understanding the subtleties about DID clinical
presentation, especially those which are not thoroughly described
in psychiatric manuals, is important to come up with a correct
diagnosis and treatment plan. Various clinicians stress the
importance of understanding the quality of symptoms and
the mechanisms behind them in order to distinguish on the
phenomenological level between borderline and DID patients
(Boon and Draijer, 1993; Laddis et al., 2017). Participants in
this study reported problems with identity, affect regulation
Do you even consider that I might give different answers if
you had asked these questions 2 or 5 years ago? I must have
erased some examples from my memory and not all experiences
belong to me. I know that people can unconsciously modify their
narratives and that is why I wanted an objective assessment.
[. . .] Nobody believed I was resistant to anesthetics until I was
diagnosed with some abnormalities. It was once written in my
medical report that I was a hypochondriac. One signature and
things become clear to everyone. Sometimes it is better to have
the worst diagnosis, but have it. (Katia).
She expected that the diagnosis would legitimize her
inability to establish satisfactory relationships, work, and become
financially independent. For this reason, she also insisted that the
final report produced for her should contain information about
how she felt maltreated by family or doctors, and revealed her
hopes to claim damages for health injury. Mary and Karina were
also upset that the interviewers did not believe they had DID.
Can you try to imagine how hard it is? I am not making things
up? You don’t believe me. I am telling you things and you must
be thinking, from the adult perspective: “You are making this up.”
Nothing pisses me off more than someone who is trying to prove
to others that they have just imagined things. They [dissociative
parts] feel neglected again, as always! (Mary).
Karina tried to hide her disappointment and claimed she was
glad she didn’t have a severe mental illness. However, she thought
she would need to build another theory explaining her symptoms.
After the interview, she sent more videos trying to prove the
assessment results were not accurate.
What about my problems then? I am unable to set boundaries,
I have anxiety, I fear that a war might break out. If this
is not dissociation, then what? I had tests and they ruled
out any neurological problems. I came here and ruled out
dissociative parts which are stuck in trauma. In addition
to avoidance, this is another characteristic PTSD feature
observed in the clinical presentation of DID patients (Van
der Hart et al., 2010). Interestingly, participants in this
study showed no evidence for intrusions (images, emotions
or somatosensory experiences directly related to trauma),
but rather problems with emotional regulation (illustrated in
sections “Themes 1 and 2”). Asked about intrusive images,
emotions or thoughts, some gave examples of distressing
thoughts attacking self-image and blaming for their behavior.
This, however, was related to attachment problems and
difficulties with self-soothing. They also revealed a tendency
to indulge themselves in these auto-critical thoughts instead of
actively avoiding them, which is often a case in dissociative
patients. Some intrusions reported by DID patients are
somatoform in nature and connected with dissociative parts
stuck in trauma time (Pietkiewicz et al., 2018). Although
three participants in this study had very high scores in
SDQ-20 indicating that they may have a dissociative disorder
(scores of 50–60 are common in DID), further interviews
revealed that they aggravated their symptoms and, in fact,
had low levels of somatoform dissociation. This shows that
tests results should be interpreted with caution and clinicians
should always ask patients for specific examples of the
symptoms they report.
and internal conflicts about expressing their impulses. Some
of them also had somatic complaints. These symptoms are
common in personality disorders and also in dissociative
disorders, which are polysymptomatic by nature. However,
the quality of these symptoms and psychological mechanisms
behind them may be different. For a differential diagnosis,
clinicians need to become familiar with the unique internal
dynamics in people who have developed a structural dissociation
of personality as a result of trauma. These patients try to
cope with everyday life and avoid actively thinking about
and discussing traumatic memories, or experiencing symptoms
associated with them. Because of that avoidance, they find
it challenging to talk about dissociative symptoms with a
clinician. Besides experiencing fear of being labeled as insane
and sent to hospital, there may be internal conflicts associated
with disclosing information. For example, dissociative parts
may forbid them to talk about symptoms or past experiences.
This conflict can sometimes be indicated by facial expression,
involuntary movements, spasms, and also felt by the clinician
in his or her countertransference. In other words, it is not
only what patients say about their experiences, but how they
do this. Therapists’ observations and countertransference may
help in assessing the quality of avoidance: How openly or easily
do patients report symptoms or adverse life experiences? Is
that associated with strong depersonalisation (detachment from
feelings and sensations, being absent)? Is there evidence for
internal conflicts, shame, fear or feeling blocked when talking
about symptoms (often observed in facial expression, tone of
voice)? Participants in this study were eager to talk about how
others mistreated them and wanted to have that documented
on paper. Difficult experiences in the past sometimes triggered
intense emotions in them (anger, resentment, and deep sadness)
but they did not avoid exploring and communicating these
states. On the contrary, they eagerly shared an elaborate
narrative of their sorrows and about their inner characters –
the multiple personalities they were convinced they had.
They became keen on DID and used a variety of resources
to familiarize themselves with core symptoms. They also
spontaneously reported them, as if they wanted to provide
sound evidence about having DID and were ready to defend
their diagnosis. Some planned their future based on it (an
academic career, writing a book, or a film). During the
interviews, it became clear that some perceived having an
exotic diagnosis as an opportunity for seeking attention and
feeling unique, exhibiting the drama of an ‘unseen child’ (see
section “Theme 4”).
Understanding a few of the symptoms identified in this
study can be useful for differential diagnosis: intrusions,
voices, switches, amnesia, use of language, depersonalisation.
How they are presented by patients and interpreted by
clinicians is important.
Intrusions
Triggered by external or internal factors (memories or anything associated with trauma), dissociative patients tend to relive traumatic experiences. In other words, they have intrusive memories, emotions or sensorimotor sensations contained by dissociative parts.

Voices
It is common for DID patients to experience auditory hallucinations (Dorahy et al., 2009; Longden et al., 2019). The voices usually belong to dissociative parts and comment on actions, express needs, likes and dislikes, and encourage self-mutilation. Subsequently, there may be conflicts between ‘voices,’ and the relationship with them is quite complex. Dorahy et al. (2009) observe that auditory hallucinations are more common in DID than in schizophrenia. In dissociative patients they are more complex and responsive, and already appear in childhood. Specifically, child voices are also to be expected in DID (97% in comparison to 6% in psychosis). None of our participants reported auditory hallucinations, although one (Dominique) said she had imaginary friends from childhood. While this could sound like a dissociative experience, exploring her experiences showed she had a tendency to absorb herself in her fantasy world and vividly imagine characters in her story (see section “Theme 2”).

Switches
Literature also shows that it is uncommon for avoidant dissociative patients to present autonomous dissociative parts to a therapist before a good relationship has been established and the phobia of inner experiences reduced (Steele et al., 2005). Sudden switches between dissociative personalities may occur only when the patient is triggered and cannot exercise enough control to hide his or her symptoms. Two participants in this study (Dominique and Karina) tried to present ‘alternate personalities’ and they actually announced this would happen, so that the interviewer did not miss them. Later on, they could relate to what happened during the alleged switch (no amnesia), maintaining the first-person perspective (I was saying/doing). Contrary to that, dissociative patients experience much shame and fear of disclosing their internal parts (Draijer and Boon, 1999). If they become aware that switches have occurred, they try to make reasonable explanations for the intrusions of parts and unusual behavior (e.g., I must have been very tired and affected by the new medicine I am taking).

Amnesia
Dell (2006) mentions various indicators of amnesia in patients with DID. However, losing memory for unpleasant experiences may occur in different disorders, usually for behaviors evoking shame or guilt, or for actions under extreme stress (Laddis et al., 2017). All patients in this study had problems with emotional regulation and some said they could not remember what they said or did when they became very upset. With some priming, they could recall and describe events. For this reason, it is recommended to explore evidence for amnesia for pleasant or neutral activities (e.g., doing shopping or cleaning, socializing). According to Laddis et al. (2017), there are different mechanisms underlying memory problems in personality and dissociative disorders.

Use of Language
Participants in this study often used clinical jargon (e.g., flashbacks, switches, and feeling depersonalized), which indicates they had read about dissociative psychopathology or received psycho-education. However, they often had a lay understanding of clinical terms. A good example in this study was having ‘flashbacks’ of neutral or pleasant situations which had once been forgotten. Examples of nightmares did not necessarily indicate reliving traumatic events during sleep (as in PTSD) but expressed conflicts and agitation through symbolic, unrealistic, sometimes upsetting dreams. When talking about the behavior of other parts and their preferences, they often maintained a first-person perspective. Requesting patients to provide specific examples is thus crucial.

Depersonalisation
Detachment from feelings and emotions, bodily sensations and external reality is often present in various disorders (Simeon and Abugel, 2006). While these phenomena have been commonly associated with dissociation, Holmes et al. (2005) stress the differences between detachment (which can be experienced by both dissociative and non-dissociative patients) and compartmentalisation, associated with the existence of dissociative parts. Allen et al. (1999) also stress that extreme absorptive detachment can interfere with noticing feelings and bodily sensations, and also with memory. Some participants in this study tended to enter trance-like states or get absorbed in their inner reality, subsequently getting detached from bodily sensations. They also described their feeling of emptiness in terms of detachment from feelings. Nevertheless, none of them disclosed evidence for having distinct dissociative parts. Some of their statements might have been misleading; for example, when they attributed anger attacks to other parts, not-me (see: Dominique in section “Theme 2”). One might suspect it could be evidence for autonomous dissociative parts. However, these participants seem to have had unintegrated, unaccepted self-states and used the concept of DID to make meaning of their internal conflicts. In their narratives, they maintained the first-person perspective. None of them provided sound evidence for extreme forms of depersonalisation, such as not feeling the body altogether or out-of-body experiences.

There can be many reasons why people develop symptoms which resemble those typical of DID. Suggestions about a dissociative disorder made by healthcare providers can help people justify and explain inner conflicts or interpersonal problems. In this study, several clinicians had suggested a dissociative disorder or DID to the patient. Literature on multiple personalities, therapy focused on them, and the use of expressions such as ‘parts,’ ‘dissociating,’ or ‘switches’ can also encourage demonstrating such symptoms. There are also secondary gains, as described in this study, such as receiving attention and care. Draijer and Boon (1999) observed that people with borderline features justified shameful behavior and avoided responsibility by attributing their actions to ‘alter personalities.’ Such people can declare amnesia for their outbursts of anger or for hitting their partners. Others explained their identity confusion and extreme emptiness using the DID model. All their participants reported emotional neglect and felt unseen in their childhood, so they adopted a new DID-patient identity to fill the inner emptiness (Draijer and Boon, 1999). Just like the participants in this study, they were angry when that diagnosis was disconfirmed during the assessment, as if the clinician had taken away something precious from them. This shows that communicating the results should be done with understanding, empathy and care. Patients and clinicians need to understand and discuss the reasons for developing a DID-patient identity, its advantages and pitfalls.

In countries where clinicians are less familiar with dissociative pathology, there may be a greater risk for both false-negative and false-positive DID diagnoses. The latter is fueled by the growing popularity of the disorder in the media and social networks. People who try to make meaning of their emotional conflicts, attachment problems and difficulties in establishing satisfactory relationships may find the DID concept attractive. It is important that clinicians who rule out or disconfirm DID also provide patients with friendly feedback that encourages using treatment for their actual problems. Nevertheless, this may still evoke strong reactions in patients whose feelings and needs have been neglected, rejected or invalidated by significant others. Disconfirming DID may be experienced by them as an attack, as taking something away from them, or as an indication that they lie.
Limitations and Further Directions
Among the 85 people who participated in a thorough diagnostic assessment, there were six false-positive DID cases, and this study focused on their personal experiences and the meaning attributed to the diagnosis. Because IPA studies are highly idiographic, they are by nature limited to a small number of participants. There were two important limitations in this research. Firstly, information about the level of psychoform symptoms has not been given, because the validation of the Polish instrument used for that purpose is not complete. Secondly, the TADS-I, used for collecting clinical data about trauma-related symptoms and dissociation, has not been validated either. Because there are no gold standards in Poland for diagnosing dissociative disorders, video-recordings of diagnostic interviews were carefully analyzed and discussed by all authors to agree upon the diagnosis. Taking this into consideration, further qualitative and quantitative research is recommended to formulate and validate more specific diagnostic criteria for DID and guidelines for the differential diagnosis.

CONCLUSION
Clinicians need to understand the complexity of DID symptoms and the psychological mechanisms responsible for them in order to differentiate between genuine and imitated post-traumatic conditions. Several features identified in this study, listed in Table 4, may indicate false-positive or imitated DID and should be taken into consideration during diagnostic assessment. In Poland, as in many countries, this requires more systematic training in diagnosis for psychiatrists and clinical psychologists in order to prevent under- and over-diagnosis of dissociative disorders, DID in particular. It is not uncommon that patients exaggerate on self-report questionnaires when they are invested in certain symptoms. In this study, all participants had scores above the cut-off score of 28 on the SDQ-20, a measure to assess somatoform dissociation, which suggested it was probable they had a dissociative disorder. However, during a clinical diagnostic interview they did not report a cluster of somatoform or psychoform dissociative symptoms and did not meet criteria for any dissociative disorder diagnosis. Clinicians also need to go beyond the face value of a patient’s responses, ask for specific examples, and notice their own countertransference. Draijer and Boon (1999) observed that DID patients were often experienced by clinicians as very fragile, while exploring symptoms with people with personality disorders (who try to aggravate them and control the interview) can evoke tiredness or even irritability. It is important that clinicians understand their own responses and use them in the diagnostic process.

While psycho-education is considered a crucial element in the initial treatment of dissociative disorders (Van der Hart et al., 2006; Howell, 2011; Steele et al., 2016), patients whose diagnosis has not been confirmed by a thorough diagnostic assessment should not be encouraged to develop knowledge about DID symptomatology, because this may affect their clinical presentation and how they make meaning of their problems. Subsequently, this may lead to a wrong diagnosis and treatment, which can become iatrogenic.

TABLE 4 | Red flags for identifying false-positive or imitated DID.
This table enumerates suggestive features of false-positive or imitated DID cases identified in this study, which should be taken into consideration during diagnostic assessment.
1. Directly or indirectly expects to confirm self-diagnosed DID.
2. DID previously suggested by someone (friend, psychologist, doctor) without thorough clinical assessment.
3. Keen on the DID diagnosis and familiarized with symptoms: read books, watched videos, talked to other patients, participated in a support group for dissociative patients.
4. Uses clinical jargon: parts, alters, dissociating, switch, depersonalisation, etc.
5. Reveals little avoidance: eagerly talks about painful experiences and dissociation, no indicators for genuine shame or inner conflicts associated with disclosing symptoms or parts.
6. Readily justifies losing control of emotions and unacceptable or shameful behavior in terms of not being oneself or being influenced by an alternative personality.
7. No evidence for the intrusions of unwanted and avoided traumatic memories or re-experiencing them in the present.
8. Denies having ego-dystonic thoughts or voices, especially starting in early childhood and child-like voices. Note: Dissociative patients may be afraid, ashamed, or feel it is forbidden to talk about the voices.
9. No evidence of amnesia for neutral or pleasant everyday activities, e.g., working, doing shopping, socializing, playing with children.
10. Tries to control the interview and provide evidence for having DID, e.g., eagerly reports dissociative symptoms without being asked about them.
11. Announces and performs a switch between personalities during clinical assessment, especially before a good relationship with the clinician and trust has been established.
12. Finds apparent gains associated with having DID: receives special interest from family and friends with whom symptoms and personalities are eagerly discussed, runs support groups, blogs or video channels for people with dissociative disorders.
13. Gets upset or disappointed when DID is not confirmed, e.g., demands re-evaluation, excuses oneself for not being accurate enough in giving right answers, wants to provide more evidence.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are not readily available
because data contain highly sensitive clinical material,
including medical data which cannot be shared according
to local regulations. Requests to access the datasets should
be directed to IP, [email protected].
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethical Review Board at the SWPS University of Social Sciences and Humanities. The patients/participants provided their written informed consent to participate in this study.

FUNDING
Grant number 2016/22/E/HS6/00306 was obtained for the study “Interpretative phenomenological analysis of depersonalization and derealization in clinical and non-clinical groups.”

AUTHOR CONTRIBUTIONS
IP collected qualitative data, performed the analysis, and prepared the manuscript. AB-N transcribed and analyzed the interviews and helped in literature review and manuscript preparation. RT performed psychiatric assessment and helped in data analysis and manuscript preparation. SB helped in data analysis and manuscript preparation. All authors contributed to the article and approved the submitted version.
REFERENCES
Leonard, D., Brann, S., and Tiller, J. (2005). Dissociative disorders: pathways to
diagnosis, clinician attitudes and their impact. Aust. N. Z. J. Psychiatry 39,
940–946. doi: 10.1080/j.1440-1614.2005.01700.x
Longden, E., Moskowitz, A., Dorahy, M. J., and Perona-Garcelán, S. (2019).
“Auditory verbal hallucinations: prevalence, phenomenology, and the dissociation hypothesis,” in Psychosis, Trauma and Dissociation: Evolving Perspectives on Severe Psychopathology (Hoboken, NJ: John Wiley & Sons Ltd.), 207–222.
Nijenhuis, E., van der Hart, O., and Kruger, K. (2002). The psychometric
characteristics of the traumatic experiences checklist (TEC): first findings
among psychiatric outpatients. Clin. Psychol. Psychother. 9, 200–210. doi: 10.
1002/cpp.332
Pietkiewicz, I. J., Hełka, A., and Tomalski, R. (2018). Validity and reliability of
the Polish online and pen-and-paper versions of the somatoform dissociation
questionnaires (SDQ-20 and PSDQ-5). Eur. J. Trauma Dissociation 3, 23–31.
doi: 10.1016/j.ejtd.2018.05.002
Pietkiewicz, I. J., and Smith, J. A. (2014). A practical guide to using interpretative
phenomenological analysis in qualitative research psychology. Psychol. J. 20,
7–14. doi: 10.14691/CPPJ.20.1.7
Putnam, F. W., Guroff, J. J., Silberman, E. K., Barban, L., and Post, R. M. (1986). The
clinical phenomenology of multiple personality disorder: review of 100 recent
cases. J. Clin. Psychiatry 47, 285–293.
Ross, C. A., Norton, G. R., and Wozney, K. (1989). Multiple personality disorder:
an analysis of 236 cases. Can. J. Psychiatry 34, 413–418. doi: 10.1177/
070674378903400509
Sar, V. (2011). Epidemiology of dissociative disorders: an overview. Epidemiol. Res.
Int. 2011, 404538. doi: 10.1155/2011/404538
Simeon, D., and Abugel, J. (2006). Feeling Unreal. Depersonalization
Disorder and the Loss of the Self. New York, NY: Oxford University
Press.
Smith, J. A., and Osborn, M. (2008). “Interpretative phenomenological analysis,”
in Qualitative Psychology: A Practical Guide to Research Methods, ed. J. Smith
(London: Sage), 53–80.
Steele, K., Boon, S., and Van der Hart, O. (2016). Treating Trauma-Related
Dissociation. A Practical, Integrative Approach. New York, NY: W. W. Norton &
Company.
Steele, K., Van Der Hart, O., and Nijenhuis, E. R. (2005). Phase-oriented treatment
of structural dissociation in complex traumatization: overcoming trauma-related phobias. J. Trauma Dissociation 6, 11–53.
Thomas, A. (2001). Factitious and malingered dissociative identity disorder:
clinical features observed in 18 cases. J. Trauma Dissociation 2, 59–77. doi:
10.1300/J229v02n04_04
Van der Hart, O., Nijenhuis, E., and Steele, K. (2006). The Haunted Self: Structural
Dissociation and the Treatment of Chronic Traumatization. London: W.W.
Norton & Co.
Van der Hart, O., Nijenhuis, E. R., and Solomon, R. (2010). Dissociation of
the personality in complex trauma-related disorders and EMDR: theoretical
considerations. J. EMDR Pract. Res. 4, 76–92. doi: 10.1891/1933-3196.
4.2.76
Allen, J. G., Console, D. A., and Lewis, L. (1999). Dissociative detachment
and memory impairment: reversible amnesia or encoding failure? Compr.
Psychiatry 40, 160–171. doi: 10.1016/S0010-440X(99)90121-9
American Psychiatric Association (2013). Diagnostic and Statistical Manual of
Mental Disorders (DSM-5), Fifth Edn. Arlington, VA: American Psychiatric
Publishing.
Boon, S., and Draijer, N. (1993). The differentiation of patients with MPD or
DDNOS from patients with a cluster B personality disorder. Dissociation 6,
126–135.
Boon, S., and Matthess, H. (2017). Trauma and Dissociation Symptoms Interview
(TADS-I), version 1.9.
Boon, S. A., and Draijer, P. J. (1995). Screening en Diagnostiek van Dissociatieve
Stoornissen. Lisse: Swets & Zeitlinger.
Boysen, G. A., and VanBergen, A. (2014). Simulation of multiple personalities:
a review of research comparing diagnosed and simulated dissociative identity
disorder. Clin. Psychol. Rev. 34, 14–28. doi: 10.1016/j.cpr.2013.10.008
Brand, B. L., Webermann, A. R., and Frankel, A. S. (2016). Assessment of complex
dissociative disorder patients and simulated dissociation in forensic contexts.
Int. J. Law Psychiatry 49, 197–204. doi: 10.1016/j.ijlp.2016.10.006
Coons, P. M., and Milstein, V. (1994). Factitious or malingered multiple personality
disorder: eleven cases. Dissociation 7, 81–85.
Dell, P. F. (2006). A new model of dissociative identity disorder. Psychiatr. Clin. 29,
1–26. doi: 10.1016/j.psc.2005.10.013
Dorahy, M. J., Brand, B. L., Şar, V., Krüger, C., Stavropoulos, P., Martínez-Taboas,
A., et al. (2014). Dissociative identity disorder: an empirical overview. Aust.
N. Z. J. Psychiatry 48, 402–417. doi: 10.1177/0004867414527523
Dorahy, M. J., Shannon, C., Seagar, L., Corr, M., Stewart, K., Hanna, D., et al. (2009).
Auditory hallucinations in dissociative identity disorder and schizophrenia with
and without a childhood trauma history: similarities and differences. J. Nerv.
Ment. Dis. 197, 892–898. doi: 10.1097/NMD.0b013e3181c299ea
Draijer, N., and Boon, S. (1999). The imitation of dissociative identity disorder:
patients at risk, therapists at risk. J. Psychiatry Law 27, 423–458. doi: 10.1177/
009318539902700304
Friedl, M., Draijer, N., and De Jonge, P. (2000). Prevalence of dissociative disorders
in psychiatric in−patients: the impact of study characteristics. Acta Psychiatr.
Scand. 102, 423–428. doi: 10.1034/j.1600-0447.2000.102006423.x
Holmes, E. A., Brown, R. J., Mansell, W., Fearon, R. P., Hunter, E. C., Frasquilho, F.,
et al. (2005). Are there two qualitatively distinct forms of dissociation? a review
and some clinical implications. Clin. Psychol. Rev. 25, 1–23.
Howell, E. F. (2011). Understanding and Treating Dissociative Identity Disorder: A
Relational Approach. New York, NY: Routledge.
International Society for the Study of Trauma and Dissociation (2011). Guidelines
for treating dissociative identity disorder in adults, third revision. J. Trauma
Dissociation 12, 115–187. doi: 10.1080/15299732.2011.537247
Laddis, A., Dell, P. F., and Korzekwa, M. (2017). Comparing the symptoms and
mechanisms of “dissociation” in dissociative identity disorder and borderline
personality disorder. J. Trauma Dissociation 18, 139–173.
World Health Organization (1993). The ICD-10 Classification of Mental and
Behavioural Disorders: Clinical Descriptions and Diagnostic Guidelines. Geneva:
World Health Organization.
Copyright © 2021 Pietkiewicz, Bańbura-Nowak, Tomalski and Boon. This is an
open-access article distributed under the terms of the Creative Commons Attribution
License (CC BY). The use, distribution or reproduction in other forums is permitted,
provided the original author(s) and the copyright owner(s) are credited and that the
original publication in this journal is cited, in accordance with accepted academic
practice. No use, distribution or reproduction is permitted which does not comply
with these terms.
Conflict of Interest: The authors declare that the research was conducted in the
absence of any commercial or financial relationships that could be construed as a
potential conflict of interest.
ORIGINAL RESEARCH
published: 06 May 2021
doi: 10.3389/fpsyg.2021.637929
Revisiting False-Positive and
Imitated Dissociative Identity
Disorder
Igor Jacob Pietkiewicz* , Anna Bańbura-Nowak, Radosław Tomalski and Suzette Boon
Research Centre for Trauma & Dissociation, SWPS University of Social Sciences and Humanities, Katowice, Poland
Edited by:
Hamed Ekhtiari,
Laureate Institute for Brain Research,
United States
Reviewed by:
Hosein Mohaddes Ardabili,
Mashhad University of Medical
Sciences, Iran
Bo Bach,
Psychiatry Region Zealand, Denmark
*Correspondence:
Igor Jacob Pietkiewicz
[email protected]
Specialty section:
This article was submitted to
Psychopathology,
a section of the journal
Frontiers in Psychology
Received: 04 December 2020
Accepted: 14 April 2021
Published: 06 May 2021
Citation:
Pietkiewicz IJ, Bańbura-Nowak A,
Tomalski R and Boon S (2021)
Revisiting False-Positive and Imitated
Dissociative Identity Disorder.
Front. Psychol. 12:637929.
doi: 10.3389/fpsyg.2021.637929
ICD-10 and DSM-5 do not provide clear diagnosing guidelines for DID, making it
difficult to distinguish ‘genuine’ DID from imitated or false-positive cases. This study
explores the meaning which patients with false-positive or imitated DID attributed to their
diagnosis. Eighty-five people who reported elevated levels of dissociative symptoms in SDQ-20 participated in clinical assessment using the Trauma and Dissociation Symptoms
Interview, followed by a psychiatric interview. The recordings of six women, whose
earlier DID diagnosis was disconfirmed, were transcribed and subjected to interpretative
phenomenological analysis. Five main themes were identified: (1) endorsement and
identification with the diagnosis. (2) The notion of dissociative parts justifies identity
confusion and conflicting ego-states. (3) Gaining knowledge about DID affects the
clinical presentation. (4) Fragmented personality becomes an important discussion
topic with others. (5) Ruling out DID leads to disappointment or anger. To avoid
misdiagnoses, clinicians should receive more systematic training in the assessment
of dissociative disorders, enabling them to better understand subtle differences in the
quality of symptoms and how dissociative and non-dissociative patients report them.
This would lead to a better understanding of how patients with and without a dissociative
disorder report core dissociative symptoms. Some guidelines for a differential diagnosis
are provided.
Keywords: dissociative identity disorder (DID), false-positive cases, personality disorder, dissociation, differential
diagnosis
INTRODUCTION
Multiple Personality Disorder (MPD) was first introduced in DSM-III in 1980 and re-named
Dissociative Identity Disorder (DID) in subsequent editions of the diagnostic manual (American
Psychiatric Association, 2013). Table 1 shows diagnostic criteria of this disorder in ICD-10, ICD-11, and DSM-5. Some healthcare providers perceive it as fairly uncommon or associated with
temporary trends (Brand et al., 2016). Even its description in ICD-10 (World Health Organization,
1993) starts with: “This disorder is rare, and controversy exists about the extent to which it is
iatrogenic or culture-specific” (p. 160). Yet, according to the guidelines of the International Society
for the Study of Trauma and Dissociation (International Society for the Study of Trauma and
Dissociation, 2011), the prevalence of DID in the general population is estimated between 1 and
3%. The review of global studies on DID in clinical settings by Sar (2011) shows the rate from
TABLE 1 | Diagnostic criteria for dissociative identity disorder.
ICD-10 Multiple personality disorder F44.81
(A) Two or more distinct personalities exist within the individual, only one being evident at a time.
(B) Each personality has its own memories, preferences, and behavior patterns, and at some time (and recurrently) takes full control of the individual’s behavior.
(C) There is inability to recall important personal information which is too extensive to be explained by ordinary forgetfulness.
(D) The symptoms are not due to organic mental disorders (F00–F09) (e.g., in epileptic disorders) or to psychoactive substance-related disorders (F10–F19) (e.g., intoxication or withdrawal).
ICD-11 Dissociative identity disorder 6B64
Dissociative identity disorder is characterized by disruption of identity in which there are two or more distinct personality states (dissociative identities) associated with
marked discontinuities in the sense of self and agency. Each personality state includes its own pattern of experiencing, perceiving, conceiving, and relating to self, the
body, and the environment. At least two distinct personality states recurrently take executive control of the individual’s consciousness and functioning in interacting with
others or with the environment, such as in the performance of specific aspects of daily life such as parenting, or work, or in response to specific situations (e.g., those
that are perceived as threatening). Changes in personality state are accompanied by related alterations in sensation, perception, affect, cognition, memory, motor
control, and behavior. There are typically episodes of amnesia, which may be severe. The symptoms are not better explained by another mental, behavioral or
neurodevelopmental disorder and are not due to the direct effects of a substance or medication on the central nervous system, including withdrawal effects, and are not
due to a disease of the nervous system or a sleep-wake disorder. The symptoms result in significant impairment in personal, family, social, educational, occupational, or
other important areas of functioning.
DSM-5 Dissociative identity disorder 300.14
(A) Disruption of identity characterized by two or more distinct personality states, which may be described in some cultures as an experience of possession. The
disruption in identity involves marked discontinuity in sense of self and sense of agency accompanied by related alterations in affect, behavior, consciousness,
memory, perception, cognition, and/or sensory-motor functioning. These signs and symptoms may be observed by others or reported by the individual.
(B) Recurrent gaps in the recall of everyday events, important personal information, and/or traumatic events that are inconsistent with ordinary forgetting.
(C) The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.
(D) The disturbance is not a normal part of a broadly accepted cultural or religious practice. Note: In children, the symptoms are not better explained by imaginary
playmates or other fantasy play.
(E) The symptoms are not attributable to the physiological effects of a substance (e.g., blackouts or chaotic behavior during alcohol intoxication) or another medical
condition (e.g., complex partial seizures).
a false positive diagnosis, which is unfavorable for the patient,
because using treatment developed for DID with patients
without autonomous dissociative parts may be inefficient or even
reinforce their pathology.
Authors who wrote about patients inappropriately diagnosed
with this disorder used terms such as ‘malingering’ or ‘factitious’
DID (Coons and Milstein, 1994; Thomas, 2001). According
to Draijer and Boon (1999), both labels imply that patients
intentionally simulate symptoms, either for external gains
(financial benefits or justification for one’s actions in court) or
for other forms of gratification (e.g., interest from others), while
in many cases their motivation is not fully conscious. Getting
a DID diagnosis can also provide structure for inner chaos and
incomprehensible experiences, and be associated with hope and
belief it is real. On the other hand, diagnostic errors often result
in inappropriate treatment plans and procedures.
As early as 1995, Boon and Draijer stressed that a growing
number of people self-diagnosed themselves based on
information from literature and the Internet, and reported
symptoms by the book during psychiatric or psychological
assessment. Based on their observation of 36 patients in whom
DID had been ruled out after applying the structured clinical
interview SCID-D, these clinicians identified differences between
genuine and imitated DID. They classified their participants into
three groups: (1) borderline personality disorder, (2) histrionic
personality disorder, or (3) persons with severe dissociative
symptoms but not DID. Participants in that study reported
symptoms similar to DID patients, including: amnesia (but only
for unacceptable behavior), depersonalisation, derealisation,
identity confusion, and identity alteration. However, they
presented themselves and interacted with the therapist in very
0.4 to 14%. However, in studies using clinical diagnostic
interviews among psychiatric in-patients, and in European
studies, these numbers were lower (Friedl et al., 2000). The
discrepancies apparently depend on the sample, the methodology
and diagnostic interviews used by researchers.
Diagnosing complex dissociative disorders (DID or Other
Specified Dissociative Disorder, OSDD) is challenging for several
reasons. Firstly, patients present a lot of avoidance and rarely
report dissociative symptoms spontaneously without direct
questioning (Boon and Draijer, 1993; International Society for
the Study of Trauma and Dissociation, 2011; Dorahy et al.,
2014). In addition, standard mental state examination does not
include these symptoms and healthcare professionals do not
receive appropriate training in diagnosing dissociative disorders
(Leonard et al., 2005). Secondly, complex dissociative disorders
are polysymptomatic, and specialists would rather diagnose these
patients with disorders more familiar to them from clinical
practice, e.g., anxiety disorders, eating disorders, schizophrenia,
or borderline personality disorder (Boon and Draijer, 1995; Dell,
2006; Brand et al., 2016). For these reasons, complex dissociative
disorders are underdiagnosed and often mis-diagnosed. For
example, 26.5–40.8% of DID patients would already have been
diagnosed and treated for schizophrenia (Putnam et al., 1986;
Ross et al., 1989). On the other hand, because there is so much
information about DID in the media (Hollywood productions,
interviews and testimonies published on YouTube, blogs), people
who are confused about themselves and try to find an accurate
diagnosis for themselves may learn about DID symptoms on the
Internet, identify themselves with the disorder, and later (even
unintentionally) report core symptoms in a very convincing
way (Draijer and Boon, 1999). This presents a risk of making
different ways. While DID patients are usually reluctant to
talk about their symptoms and experience their intrusions as
shameful, people who imitated DID were eager to present their
problems, sometimes in an exaggerated way, in an attempt to
convince the clinician that they suffered from DID (Boon and
Draijer, 1995; Draijer and Boon, 1999). Similar observations
were expressed by Thomas (2001) saying that people with
imitated DID can present their history chronologically, using
the first person even when they are highly distressed or allegedly
presenting an altered personality, and are comfortable with
disclosing information about experiences of abuse. They can
talk about intrusions of dissociative parts, hearing voices or
difficulties controlling emotions, without shame.
Unfortunately, ICD-10, ICD-11, and DSM-5 offer no specific
guidelines on how to differentiate patients with personality
disorders and dissociative disorders by the manner in which
they report symptoms. There are also limited instruments to
distinguish between false-positive and false-negative DID. From
the clinical perspective, it is also crucial to understand the motives
for being diagnosed with DID, and disappointment when this
diagnosis is disconfirmed. Accurate assessment can contribute to
developing appropriate psychotherapeutic procedures (Boon and
Draijer, 1995; Draijer and Boon, 1999). Apart from observations
already referred to earlier in this article, there are no qualitative
analyses of false-positive DID cases in the past 20 years.
Most research was quantitative and compared DID patients
and simulators in terms of cognitive functions (Boysen and
VanBergen, 2014). This interpretative phenomenological analysis
is an idiographic study which explores personal experiences and
meaning attributed to conflicting emotions and behaviors in
six women who had previously been diagnosed with DID and
referred to the Research Centre for Trauma and Dissociation for
re-evaluation. It explores how they came to believe they have DID
and what had led clinicians to assume that these patients could be
suffering from this disorder.
Procedure
This study is part of a larger project examining alterations
in consciousness and dissociative symptoms in clinical and
non-clinical groups, held at the Research Centre for Trauma
& Dissociation, financed by the National Science Centre, and
approved by the Ethical Review Board at the SWPS University
of Social Sciences & Humanities. Potential candidates enrolled
themselves or were registered by healthcare providers via an
application integrated with the website www.e-psyche.eu. They
filled in demographic information and completed online tests,
including: Somatoform Dissociation Questionnaire (SDQ-20,
Pietkiewicz et al., 2018) and Trauma Experiences Checklist
(Nijenhuis et al., 2002). Those with elevated SDQ-20 scores
(above 28 points) or those referred for differential diagnosis were
consulted and if dissociative symptoms were confirmed, they
were invited to participate in an in-depth clinical assessment
including a series of interviews, video-recorded and performed at
the researcher’s office by the first author who is a psychotherapist
and supervisor experienced in the dissociation field. In Poland,
there are no gold standards for diagnosing dissociative disorders.
The first interview was semi-structured, open-ended and
explored the patient’s history, main complaints and motives for
participation. It included questions such as: What made you
participate in this study? What are your main difficulties or
symptoms in daily life? What do you think caused them? Further
questions were then asked to explore participants’ experiences
and meaning-making. This was followed by the Trauma and
Dissociation Symptoms Interview (TADS-I, Boon and Matthess,
2017). The TADS-I is a new semi-structured interview intended
to identify DSM-5 and ICD-11 dissociative disorders. The
TADS-I differs in several ways from other semi-structured
interviews for the assessment of dissociative disorders. Firstly,
it includes a significant section on somatoform dissociative
symptoms. Secondly, it includes a section addressing other
trauma-related symptoms for several reasons: (1) to obtain a
more comprehensive clinical picture of possible comorbidities,
including symptoms of PTSD and complex PTSD, (2) to gain
a better insight into the (possible) dissociative organization of
the personality: patient’s dissociative parts hold many of these
comorbid symptoms and amnesia, voices or depersonalisation
experiences are often associated with these symptoms; and (3)
to better distinguish between complex dissociative disorders,
personality disorders and other Axis I disorders and false positive
DID. Finally, the TADS-I also aims to distinguish between
symptoms of pathological dissociation indicating a division of
the personality and symptoms which are related to a narrowing
or a lowering of consciousness, and not to the structural
dissociation of the personality. Validation testing of the TADS-I
is currently underway. TADS interviews ranging from 2 to 4 h
were usually held in sessions of 90 min. Interview recordings
were assessed by three healthcare professionals experienced in
the dissociation field, who discussed each case and consensually
came up with a diagnosis based on ICD-10. An additional mental
state examination was performed by the third author who is
a psychiatrist, also experienced in the differential diagnosis of
dissociative disorders. He collected medical data, double-checked
the most important symptoms, communicated the results and
discussed treatment indications. Qualitative data collected from
MATERIALS AND METHODS
This study was carried out in Poland in 2018 and
2019. Rich qualitative material collected during in-depth
clinical assessments was subjected to the interpretative
phenomenological analysis (IPA), a popular methodological
framework in psychology for exploring people’s personal
experiences and interpretations of phenomena (Smith and
Osborn, 2008). IPA was selected to build a deeper understanding
of how patients who endorsed and identified with dissociative
identity disorder made sense of the diagnosis and what
it meant for them to be classified as false-positive cases
during reassessment.
Interpretative phenomenological analysis uses phenomenological, hermeneutic, and idiographic principles. It
employs ‘double hermeneutics,’ in which participants share their
experiences and interpretations, followed by researchers trying
to make sense and comment on these interpretations. IPA uses
small, homogenous, purposefully selected samples, and data
are carefully analyzed case-by-case (Smith and Osborn, 2008;
Pietkiewicz and Smith, 2014).
who also developed the TADS-I. They are all mentors and
trainers of the European Society for Trauma and Dissociation,
with significant expertise in the assessment of post-traumatic
conditions. The first co-investigator (AB) has a master’s degree in
psychology and is a Ph.D. candidate. She is also a psychotherapist
in training. All authors coded and discussed their understanding
of data. Their understanding and interpretations of symptoms
reported by participants were influenced by their background
knowledge and experience in diagnosing and treating patients
with personality disorders and dissociative disorders.
six patients out of 85 were selected for this interpretative
phenomenological analysis, based on the following criteria for
inclusion, which could ensure a homogenous sample expected of
IPA studies – (a) female, (b) previously diagnosed or referred to
rule in/out DID, (c) endorsement and identification with DID, (d)
dissociative disorder disconfirmed in the assessment. Interviews
with every participant in this study ranged from 3 h 15 min to 7 h
20 min (mean: 6 h).
Participants
Participants of this IPA were six female patients aged between
22 and 42 years who were selected out of 86 people examined
in a larger study exploring dissociation and alterations in
consciousness in clinical and non-clinical groups. (Participants
in the larger study met criteria for different diagnoses and
seven among them had ‘genuine’ DID). These six patients did
not meet DID criteria on the TADS-I interview but believed
themselves that they qualified for that diagnosis. Four of them
had higher education, two were secondary school graduates.
All of them registered in the study by themselves hoping to
confirm their diagnosis but two (Olga and Katia) were referred
by psychiatrists, and the others by psychotherapists. All of them
traveled from far away, which showed their strong motivation
to participate in the assessment. Four had previously had
psychiatric treatment and five had been in psychotherapy due
to problems with emotional regulation and relationships. In
the cases of Victoria and Dominique, psychotherapy involved
working with dissociative parts. None of them recalled any
physical or sexual abuse, but three (Dominique, Victoria, and
Mary), following therapists’ suggestions, were trying to seek
such traumatic memories to justify their diagnosis. They all felt
emotionally neglected by caregivers in childhood and emotionally
abused by significant others. None of them reported symptoms
indicating the existence of autonomous dissociative parts.
None had symptoms indicating amnesia for daily events, but
four declared not remembering single situations associated
with conflicting emotions, shame, guilt, or conversations
during which they were more focused on internal experiences
than on their interlocutors. None experienced PTSD
symptoms (e.g., intrusive traumatic memories and avoidance),
autoscopic phenomena (e.g., out-of-body experiences), or
clinically significant somatoform symptoms. None had
auditory verbal hallucinations but four intensely engaged in
daydreaming and experienced imagined conversations as very
real. All of them had been seeking information about DID
in literature and the Internet. For more information about
them see Table 2. Their names have been changed to protect
their confidentiality.
Data Analysis
Verbatim transcriptions were made of all video recordings, which
were analyzed together with researchers’ notes using qualitative
data-analysis software – NVivo11. Consecutive analytical steps
recommended for IPA were employed in the study (Pietkiewicz
and Smith, 2014). For each interview, researchers watched
the recording and carefully read the transcript several times.
They individually made notes about body language, facial
expressions, the content and language use, and wrote down
their interpretative comments using the ‘annotation’ feature
in NVivo10. Next, they categorized their notes into emergent
themes by allocating descriptive labels (nodes). The team then
compared and discussed their coding and interpretations. They
analyzed connections between themes in each interview and
between cases, and grouped themes according to conceptual
similarities into main themes and sub-themes.
Credibility Checks
During each interview, participants were encouraged to give
examples illustrating reported symptoms or experiences.
Clarification questions were asked to negotiate the meaning
participants wanted to convey. At the end of the interview,
they were also asked questions to check that their responses
were thorough. The researchers discussed each case thoroughly
and also compared their interpretative notes to compare
their understanding of the content and its meaning (the
second hermeneutics).
RESULTS
Participants in this study explained how they concluded they
were suffering from DID, developed knowledge about the
syndrome and an identity of a DID patient, and how this affected
their everyday life and relationships. Five salient themes appeared
in all interviews, as listed in Table 3. Each theme is discussed
and illustrated with verbatim excerpts from the interviews, in
accordance with IPA principles.
The Researchers
The principal investigator (IJP) is a psychotherapist, supervisor, and researcher in the field of community health psychology and clinical psychology. The second co-investigator (RT) is a psychiatrist, psychotherapist, and supervisor. The third co-investigator (SB) is a clinical psychologist, psychotherapist, supervisor, and a consulting expert in forensic psychology,
Theme 1: Endorsement and Identification With the Diagnosis
All six participants hoped to confirm they had DID. They
read books and browsed the Internet seeking information about
dissociation, and watched YouTube videos presenting people
describing multiple personalities. Dominique, Victoria, Mary,
TABLE 2 | Study participants.
Name
Participant’s characteristics
Victoria
Age 22, single, lives with parents and younger brother. Stopped her studies after 3 years and was hospitalized in a psychiatric facility for a short period
due to problems with emotions and relationships. Reports difficulties with recognizing and expressing emotions, emptiness, feels easily hurt and
rejected, afraid of abandonment. Perceives herself as unimportant and worthless, sometimes cuts herself for emotional relief. Maintains superficial
relationships, does not trust people; in childhood was frequently left alone with grandparents because her parents traveled; described her parents as
setting high expectations, mother as getting easily upset and impulsive. No substance use. No history of physical or sexual trauma. Her maternal
grandfather abused alcohol but was not violent; no history of suicides in her family. Scored 38 points in SDQ-20 but no significant somatoform
symptoms reported during clinical assessment.
Karina
Age 22, single, secondary education. Enrolled in university programs twice but stopped. Acting is a hobby; recently worked as a waitress or hostess,
currently unemployed. Has had psychiatric treatment for 17 years due to anxiety and problems in relationships. Two short hospital admissions; in
psychodynamic psychotherapy in last 2 years. Reports emotional instability, feeling depressed, anxious, and lonely; maintains few relationships;
experiences conflicts with expressing anger and needs for dependency, no self-harm. She had periods of using alcohol excessively in the past, currently
once a month, no drugs. No family members used psychiatric help. Reports abandonment, emotional and physical abuse in childhood and eagerly
talks about these experiences. Scored 68 points in SDQ-20 but no significant somatoform symptoms reported during clinical assessment.
Dominique
Age 33, higher education, married, three children. Works as a playwright, comes from an artistic family. Was given away to her grandparents as a baby
and returned to parents and brothers when she was seven; often felt abandoned and neglected. She had learning difficulties and problems in
relationships, mood regulation, auto-aggressive behavior, feelings of emptiness and loneliness. Denies using alcohol or drugs; at secondary school
abused marihuana. Her paternal grandmother had psychosis, her father abused marihuana and mother was treated for depression. Reports poverty at
home. No suicides in family. Often retreated into her fantasy world in which she developed a story about boys kept in a resocialisation center. Has had
psychiatric treatment and counseling for 20 years. Scored 52 points in SDQ-20 but no somatoform symptoms confirmed during clinical assessment.
Mary
Age 34, higher education, married. Works in the creative industry and engaged in proselytic activities as an active Jehovah’s Witness (joined the
organization 10 years earlier, encouraged by her mother). Has had EMDR therapy for 2 years due to problems maintaining relationships and managing
anger. When her therapist asked if she felt there were different parts inside her, she started exploring information about DID. She denies smoking or
using any drugs, alcohol. Mother suffered from mild depression. No suicides in family. Scored 48 points in SDQ-20 but no somatoform symptoms
confirmed during clinical assessment.
Olga
Age 40, higher education, single. Works in social care. Reports depressive mood, low self-esteem, difficulties with concentration, problems with social
contacts. Occasionally uses alcohol in small doses, no drugs. Describes her mother as demanding but also distant and negligent because she was
busy with her medical practice. Father withdrawn and depressed but never used psychiatric treatment. No other trauma history. No suicides in family.
Tried psychotherapy four times but usually terminated treatment after a while. Her psychiatrist referred her for evaluation of memory problems and
for confirmation of DID. Scored 31 points in SDQ-20; confirms a few somatoform symptoms: headaches, symptoms associated with cystitis, detachment from
bodily sensations.
Katia
Age 42, post-graduate education. Unemployed. On social benefits for 15 years due to neurological and pulmonary symptoms, complications after
urological surgeries. Reports low self-esteem, self-loathing, problems in establishing or maintaining relationships, feeling lonely, rejected and not
understood. Inclinations toward passive-aggressive behavior toward people representing authority, fatigue, insecurity about her financial situation.
Reports no alcohol or drug use. Mother treated for depression. No suicides in family. Scored 69 points in SDQ-20; multiple somatic complaints
associated with Lyme disease, describes mother as emotionally and physically abusive, and father as abandoning and unprotecting. Has never used
psychotherapy; was referred for consultation by a psychiatrist after persuading him that she had DID symptoms.
Participants names have been changed to protect their confidentiality.
During an argument with my mother I felt as if some incredible
force took control and I smashed the glass in the cabinet with my
hand. It was like being under control of an alien force. I started
reading about borderline and I thought I had it. I found a webpage
about that and told my mother I should see a psychiatrist. I went
for a consultation and told her my story. This lady said: “Child,
you don’t have borderline, but multiple personality.” She wanted
to keep me in the psychiatric unit but I did not agree to stay for
observation. (Dominique).
TABLE 3 | Salient themes identified during the interpretative
phenomenological analysis.
Theme 1:
Endorsement and identification with the diagnosis
Theme 2:
Using the notion of dissociative parts to justify identity confusion
and conflicting ego-states
Theme 3:
Gaining knowledge about DID affects the clinical presentation
Theme 4:
Fragmented personality becomes an important discussion topic
with others
Theme 5:
Ruling out DID leads to disappointment or anger.
This led Dominique to research the new diagnosis. Karina also
said she was encouraged to seek information about DID, when a
doctor suggested she might be suffering with it.
When I was 11, I had problems at school and home. Other
children made fun of me. My mom took me to a doctor and he
said I had borderline, but later I was diagnosed with an anxiety
disorder. That doctor also suggested I had DID and told me that I
should read more about this diagnosis. (Karina).
and Karina said that a mental health professional suggested
this diagnosis to them. Dominique remembers consulting a
psychiatrist when she was 15, because she had problems
controlling anger at home or in public places. She initially
found descriptions of borderline personality captured her
experiences well enough, but a psychiatrist refuted the idea and
recommended further diagnostics toward a dissociative disorder.
However, the girl refused to go to hospital for observation.
Victoria and Mary shared similar stories about
psychotherapists suggesting the existence of dissociative parts,
having readily accepted this new category as a good explanation
for aggressive impulses or problems with recalling situations
evoking guilt or shame. Dominique and Victoria stressed,
however, that, apart from feeling emotionally abandoned, they
could not trace any significant traumas in their early childhoods,
although therapists maintained that such events must be present
in dissociative patients.
different expectations. Whoever comes up front, then I have these
ideas. (Dominique).
Dominique neither had amnesia nor found evidence for
leading separate lives and engaging herself in activities associated
with her characters. She maintained her job as a playwright, and
merely imagined alternative scenarios of her life, expressed by
her inner heroes. In other parts of the interview, she referred
to them as ‘voices inside,’ but admitted she never heard them
acoustically. They were her own vivid thoughts representing
different, conflicting opinions or impulses.
Katia said she felt internally fragmented. There were times
when she engaged in certain interests, knowledge and skills, but
she later changed her goals. Fifteen years ago she gave up her
academic career and went on sickness benefit when she became
disabled due to medical problems; she experienced this as a great
loss, a failure, which affected her sense of identity and purpose.
I have no idea why I have this [DID]. My therapist looked for
evidence of childhood trauma, which sounds like the easiest
explanation, but I don’t feel I had any horrific memories which
I threw out of my consciousness. (Victoria).
Katia and Olga had used psychiatric treatment for anxiety
and depression for years. After exploring information about
different mental disorders they concluded they had DID.
They thought there was a similarity between their personal
experiences and those of people publishing testimonials about
multiple personalities.
In recent years I have a growing sense of identity fragmentation. I
have problems with defining my identity because it changes. I used
to feel more stable in the past. I had these versions of myself which
were more dominating, so I had a stronger sense of identity. For
example, 20 years ago there was this scientist. I was studying and
felt like a scientist, attending conferences. Now I don’t have that
and I don’t know who I am. [. . .] I also have changing interests and
hobbies because of different personalities. Long ago I liked certain
music, played the guitar, sang songs. I don’t do that anymore, I
suddenly lost interest in all that. (Katia).
I tried to understand this battle inside, leading me to stagnation.
I didn’t know how to describe that but I recently bought a book
Healing the fragmented selves of trauma survivors, and everything
was explained there. Some of these things I have discovered myself
and some were new to me. (Olga).
Subsequently, Katia presented to her doctor a review
of literature about DID, trying to persuade him that she
had this disorder.
Theme 2: Using the Notion of
Dissociative Parts to Justify Identity
Confusion and Conflicting Ego-States
Katia described changes in her professional and social lives in terms of switches between dissociative parts. Although she maintained the first-person narrative ("I was studying," "I played,"
or “I sang”), indicating some sense of continuity, she thought it
proved the existence of two or more distinct personalities.
Participants also reported thoughts, temptations, impulses or
actions which seemed to evoke conflicting feelings. Attributing
them to ‘something inside that is not-me’ could free them from
guilt or shame, so they used a metaphor of someone taking over,
logging in, or switching. Dominique thought it was inappropriate
to express disappointment or anger, but she accepted the thought
that her dissociative parts were doing this.
Once participants had embraced the idea of having multiple
personalities, they seemed to construct inner reality and justify
conflicting needs, impulses or behaviors as an expression of
dissociative parts. They referred to being uncertain about who
they were and having difficulties recognizing personal emotions,
needs or interests. Some of them felt it was connected to a
negative cognition about themselves as worthless, unimportant,
and not deserving to express what they felt or wanted. Victoria
said she would rather define herself through the eyes of others:
When I’m angry at my therapist, it is not really me but somebody
inside who gets angry easily. Greg often switches on in such
situations and says: “Tell her this and this”. [. . .] I went to a shop
once and discovered that the price on the label was not for a whole
package of batteries but a single one. And suddenly Greg switched
on and had a row with the cashier. I mean, I did it, but wound up
by his anger. This is so weird, I wouldn’t react like that. They just
charged incorrectly and I would normally ignore that but Greg
said: “I give a shit about their mistakes. I won’t accept that.” What
a failure! (Dominique).
My therapist asked what I wanted or needed. It turned out that
without other people’s expectations or preferences to which I
normally adjust, I wouldn’t know who I am or what I want. I
usually engage in my friends’ hobbies and do what I think gives
them pleasure. Otherwise, I think they will not like me and reject
me, because I have nothing to offer. (Victoria).
Since a young age, Dominique tended to immerse herself in
a fantasy world, developing elaborate scenarios about people
living in a youth center administered by a vicious boss. Different
characters in her ‘Story’ represented specific features, interests
and plans she had.
Well, there is John who is a teacher and researcher. He teaches mathematics. I have no skills in maths at all. Tim is a philosopher and would like to train philosophers, enroll in doctoral studies. He would like me to study philosophy but the rest of the system wants me to be a worrier. Ralf is a caring nurse and would like to become a paramedic. It is difficult to reconcile all these different expectations. Whoever comes up front, then I have these ideas. (Dominique).

Mary said she had parts that expressed anger, sadness, and needs associated with attachment. She observed them and allowed them to step in when situations required.
There were situations in my life when the teenager must have been
active. She protected me. She is ready to fight; I am not like that
at all. I hate violence, and that teenager likes using force to protect
me. [. . .] My therapist suggested I call her after this interview if I
do not feel well. I didn't accept that but the [inner] girls got upset and told me I needed her help. They made me comply, so I agreed to call her if I do not feel well. It has always been like this. (Mary).

During assessment, no participant provided evidence for the existence of autonomous dissociative parts. It seems that the inner characters described by them personified unintegrated ego-states which used to evoke conflicting feelings.

While participants maintained they had flashbacks, they understood them as sudden recollections of past memories but not necessarily related to trauma. Katia said she recently remembered the picture of the house and garden where she played as a child and associated these experiences with moments of joy. Karina also exemplified her flashbacks with 'intrusions of happy memories' which belonged to other personalities:
Sometimes I begin to laugh but this is not my laughter, but the
laughter of sheer joy. Someone inside me is very happy and wants
to talk about happy childhood memories, make jokes. (Karina).
Theme 3: Exploring Personal
Experiences via the Lens of Dissociation
Reading books, websites and watching videos of people who claimed to have DID encouraged them to compare themselves, talk about and express 'multiple personalities.' The participants became familiar with specialist terms and learned about core symptoms mentioned in psychiatric manuals.

Mary said a child part of her was responsible for flashbacks and making comments about current situations. However, she later denied hearing voices or having any other Schneider's symptoms.
I can hear her comments, that she does not like something. I can
be flooded by emotions and have flashbacks associated with that
child. For example, there is a trigger and I can see things that
this child has seen. She is showing me what was happening in
her life. (Mary).
I read First person plural which helped me understand what this
is all about. The drama of the gifted child and The body keeps the
score. More and more girls started to appear. There is a 6-month
old baby which showed up only 2 months ago, a sad 11-year
old teenager, and a 16-year old who thinks I am a loser. I was
a teenager like that. Now she is having problems and becoming
withdrawn there are fewer switches, because she knows we need
to help the little one first. (Mary).
Participants discussed their dissociative parts, their names and
features, exhibiting neither avoidance nor fear or shame. On
the contrary, they seemed to draw pleasure by smiling, showing
excitement and eagerness to produce more examples of their
unusual experiences. At the beginning of the interview, Karina
was very enthusiastic and said, “My heart is beating so fast, as if I
were in fight-or-flight mode.”
Olga was also inspired by books. Not only did she find
similarities to trauma survivors but she made new discoveries
and thought there were other experiences she had been unaware
of earlier. Victoria started using techniques which literature
recommended for stabilization in dissociative disorders. She
said these books helped her understand intense emotions and
improve concentration.

This explains everything that happens to me, why I get so angry. I also found anchors helpful. I focus on certain objects, sounds or smells which remind me where I am, instead of drifting away into my thoughts. (Victoria).
Theme 4: Talking About DID Attracts
Attention
Not only were multiple personalities a helpful metaphor for
expressing conflicting feelings or needs (already mentioned
in Theme 2), but they also became an important topic of
conversations with family or friends.
My husband says sometimes: “I would like to talk to the little girl.”
He then says that I start behaving differently. I also talk to my
therapist using different voices. Sometimes, she addresses them
asking questions. If questions are asked directly, they respond, but
there are times I do not allow them to speak, because the teenager
part can be very mean and attacks people. (Mary).
It seemed that exploring information about DID encouraged
changes in participants’ clinical presentation. At first, they
merely struggled with emotional lability or detachment, internal
conflicts, and concentration problems. Later, they started
reporting intrusions of dissociative parts or using clinical terms
(e.g., flashback) for experiences which were not necessarily
clinical symptoms. Dominique said that the characters of her
story would often ‘log in’ and take control. She demonstrated
that during the interview by changing her voice and going
into a ‘trance.’ She created her own metaphors, explaining
these experiences and comparing them with those described in
literature. She stressed that she never had amnesia and remained
aware of what was happening during her ‘trance.’
It may have been easier for Mary to express her needs for
dependency and care by ascribing them to a little girl and,
because she felt awkward about feeling angry with the therapist,
attributing hostile impulses to a teenager could give her a sense
of control and reduce guilt. Karina decided to create a video blog for documenting dissociative parts, and shared her videos
with people interested in DID. She said she was surprised to find
clips in which she looked dreadful, having her make-up smeared
all over the face, because she had no memory of doing that.
However, she showed no signs that it bothered her. She discussed
the videos with her best friend, a DID fan who had encouraged
her to enroll in the study in order to confirm her diagnosis.
They were collecting evidence to support the idea that she had
a dissociative disorder, which she presented one by one, before
being asked about details.
I think it is a form of dissociation on the emotional level. I read a
lot. . . The minds of Billy Milligan or First person plural. For sure, I
do not have an alteration of personality. I have co-consciousness.
My theory is, we are like a glove, we all stem from one trunk, but
we are like separate fingers. (Dominique).
Mark [her friend] reads a lot about DID. He says I sometimes talk
in a high voice which is not the way I usually talk. He refers to
us as plural. [. . .] In some of these videos I do not move or blink
for a minute. I look at some point and there is no expression on
my face. I can remember things until this moment, and later I
discover myself looking like something from Creepypastas. I am
so sorry for people who have to see this. . . and I found my diary.
I have been writing diaries since I was seven. I sometimes have no
memory for having written something. I need to find these notes
because I would like to write a book about a fantasy world and
inner conflicts. (Karina).
Only Victoria seemed relieved that her DID diagnosis was not
confirmed. She was happy to discuss how attachment problems or
conflicts with expressing emotions and needs affected her social
life and career, and receive guidelines for future treatment. She
felt liberated from having to uncover childhood traumas that her
therapist expected her to have as a dissociative patient.
Dominique and Katia also wrote journals to record
dissociative experiences. Katia hoped to be recognized as
an expert-by-experience and develop her career in relation
to that. She brought with her the manuscript of a book she hoped to publish one day.
I was hoping that you would find another explanation for my
problems. . . for what is wrong with me, why I feel so sensitive
or spaced out, because it is annoying. I would like to know what is
going on. I don’t think I’ve had any severe trauma but everybody
wants to talk about trauma all the time. (Victoria).
Theme 5: Ruling Out DID Leads to
Disappointment or Anger
Four participants were openly disappointed that their DID
diagnosis was not confirmed. They doubted if their descriptions
were accurate enough, or they challenged the interviewer’s
understanding of the symptoms. Katia also suggested that she
was incapable of providing appropriate answers supporting her
diagnosis due to amnesia and personality alterations.
DISCUSSION

ICD-10 and DSM-5 provide inadequate criteria for diagnosing
DID, basically limited to patients having distinct dissociative
identities with their own memories, preferences and behavioral
patterns, and episodes of amnesia (American Psychiatric
Association, 2013; World Health Organization, 1993). Clinicians
without experience of DID may therefore expect patients
to present disruptions of identity during a consultation and
spontaneously report memory problems. However, trauma
specialists view DID as a ‘disorder of hiddenness’ because patients
often find their dissociative symptoms bizarre and confusing and
do not disclose them readily due to their shame and the phobia
of inner experiences (Steele et al., 2005, 2016; Van der Hart et al.,
2006). Instead, they tend to undermine their significance, hide
them and not report them during consultations unless asked
about them directly. Dissociative patients can also be unaware
of their amnesia and ignore evidence for having done things
they cannot remember because realizing that is too upsetting.
In contrast, this study and the one conducted in 1999 in the Netherlands by Draijer and Boon show that some people
with personality disorders enthusiastically report DID symptoms
by the book, and use the notion of multiple personalities to
justify problems with emotional regulation, inner conflicts, or
to seek attention. As with Dutch patients, Polish participants
were preoccupied with their alternate personalities and two
tried to present a ‘switch’ between parts. Their presentations
were naïve and often mixed with lay information on DID.
However, what they reported could be misleading for clinicians
inexperienced in the dissociation field or those lacking the
appropriate tools to distinguish a genuine dissociative disorder
from an imitated one.
Therefore, understanding the subtleties about DID clinical
presentation, especially those which are not thoroughly described
in psychiatric manuals, is important to come up with a correct
diagnosis and treatment plan. Various clinicians stress the
importance of understanding the quality of symptoms and
the mechanisms behind them in order to distinguish on the
phenomenological level between borderline and DID patients
(Boon and Draijer, 1993; Laddis et al., 2017). Participants in
this study reported problems with identity, affect regulation
Do you even consider that I might give different answers if
you had asked these questions 2 or 5 years ago? I must have
erased some examples from my memory and not all experiences
belong to me. I know that people can unconsciously modify their
narratives and that is why I wanted an objective assessment.
[. . .] Nobody believed I was resistant to anesthetics until I was
diagnosed with some abnormalities. It was once written in my
medical report that I was a hypochondriac. One signature and
things become clear to everyone. Sometimes it is better to have
the worst diagnosis, but have it. (Katia).
She expected that the diagnosis would legitimize her
inability to establish satisfactory relationships, work, and become
financially independent. For this reason, she also insisted that the
final report produced for her should contain information about
how she felt maltreated by family or doctors, and revealed her
hopes to claim damages for health injury. Mary and Karina were
also upset that the interviewers did not believe they had DID.
Can you try to imagine how hard it is? I am not making things
up? You don’t believe me. I am telling you things and you must
be thinking, from the adult perspective: “You are making this up.”
Nothing pisses me off more than someone who is trying to prove
to others that they have just imagined things. They [dissociative
parts] feel neglected again, as always! (Mary).
Karina tried to hide her disappointment and claimed she was
glad she didn’t have a severe mental illness. However, she thought
she would need to build another theory explaining her symptoms.
After the interview, she sent more videos trying to prove the
assessment results were not accurate.
What about my problems then? I am unable to set boundaries,
I have anxiety, I fear that a war might break out. If this
is not dissociation, then what? I had tests and they ruled
out any neurological problems. I came here and ruled out another possibility. It is some information but I have not heard anything new. (Karina).
dissociative parts which are stuck in trauma. In addition
to avoidance, this is another characteristic PTSD feature
observed in the clinical presentation of DID patients (Van
der Hart et al., 2010). Interestingly, participants in this
study showed no evidence for intrusions (images, emotions
or somatosensory experiences directly related to trauma),
but rather problems with emotional regulation (illustrated in
sections “Themes 1 and 2”). Asked about intrusive images,
emotions or thoughts, some gave examples of distressing
thoughts attacking their self-image and blaming them for their behavior.
This, however, was related to attachment problems and
difficulties with self-soothing. They also revealed a tendency
to indulge themselves in these auto-critical thoughts instead of
actively avoiding them, which is often the case in dissociative
patients. Some intrusions reported by DID patients are
somatoform in nature and connected with dissociative parts
stuck in trauma time (Pietkiewicz et al., 2018). Although
three participants in this study had very high scores in
SDQ-20 indicating that they may have a dissociative disorder
(scores of 50–60 are common in DID), further interviews
revealed that they exaggerated their symptoms and, in fact,
had low levels of somatoform dissociation. This shows that
test results should be interpreted with caution and clinicians
should always ask patients for specific examples of the
symptoms they report.
and internal conflicts about expressing their impulses. Some
of them also had somatic complaints. These symptoms are
common in personality disorders and also in dissociative
disorders, which are polysymptomatic by nature. However,
the quality of these symptoms and psychological mechanisms
behind them may be different. For a differential diagnosis,
clinicians need to become familiar with the unique internal
dynamics in people who have developed a structural dissociation
of personality as a result of trauma. These patients try to
cope with everyday life and avoid actively thinking about
and discussing traumatic memories, or experiencing symptoms
associated with them. Because of that avoidance, they find
it challenging to talk about dissociative symptoms with a
clinician. Besides experiencing fear of being labeled as insane
and sent to hospital, there may be internal conflicts associated
with disclosing information. For example, dissociative parts
may forbid them to talk about symptoms or past experiences.
This conflict can sometimes be indicated by facial expression,
involuntary movements, or spasms, and can also be felt by the clinician
in his or her countertransference. In other words, it is not
only what patients say about their experiences, but how they
do this. Therapists’ observations and countertransference may
help in assessing the quality of avoidance: How openly or easily
do patients report symptoms or adverse life experiences? Is
that associated with strong depersonalisation (detachment from
feelings and sensations, being absent)? Is there evidence for
internal conflicts, shame, fear or feeling blocked when talking
about symptoms (often observed in facial expression, tone of
voice)? Participants in this study were eager to talk about how
others mistreated them and wanted to have that documented
on paper. Difficult experiences in the past sometimes triggered
intense emotions in them (anger, resentment, and deep sadness)
but they did not avoid exploring and communicating these
states. On the contrary, they eagerly shared an elaborate
narrative of their sorrows and about their inner characters –
the multiple personalities they were convinced they had.
They became keen on DID and used a variety of resources
to familiarize themselves with core symptoms. They also
spontaneously reported them, as if they wanted to provide
sound evidence about having DID and were ready to defend
their diagnosis. Some planned their future based on it (an
academic career, writing a book, or a film). During the
interviews, it became clear that some perceived having an
exotic diagnosis as an opportunity for seeking attention and
feeling unique, exhibiting the drama of an ‘unseen child’ (see
section “Theme 4”).
Understanding a few of the symptoms identified in this
study can be useful for differential diagnosis: intrusions,
voices, switches, amnesia, use of language, depersonalisation.
How they are presented by patients and interpreted by
clinicians is important.
Voices
It is common for DID patients to experience auditory
hallucinations (Dorahy et al., 2009; Longden et al., 2019).
The voices usually belong to dissociative parts and comment
on actions, express needs, likes and dislikes, and encourage
self-mutilation. Subsequently, there may be conflicts between
‘voices,’ and the relationship with them is quite complex.
Dorahy et al. (2009) observe that auditory hallucinations
are more common in DID than in schizophrenia. In
dissociative patients they are more complex and responsive,
and already appear in childhood. Specifically, child voices
are also to be expected in DID (97% in comparison to 6%
in psychosis). None of our participants reported auditory
hallucinations although one (Dominique) said she had
imaginary friends from childhood. While this could sound
like a dissociative experience, exploring her experiences
showed she had a tendency to absorb herself in her fantasy
world and vividly imagine characters in her story (see
section “Theme 2”).
Switches
Literature also shows that it is uncommon for avoidant
dissociative patients to present autonomous dissociative parts
to a therapist before a good relationship has been established
and the phobia of inner experiences reduced (Steele et al.,
2005). Sudden switches between dissociative personalities may
occur only when the patient is triggered and cannot exercise
enough control to hide his or her symptoms. Two participants
in this study (Dominique and Karina) tried to present ‘alternate
personalities’ and they actually announced this would happen,
so that the interviewer did not miss them. Later on, they could
Intrusions
Triggered by external or internal factors (memories or anything
associated with trauma) dissociative patients tend to relive
traumatic experiences. In other words, they have intrusive
memories, emotions or sensorimotor sensations contained by
attacks to other parts, not-me (see: Dominique in section
“Theme 2”). One might suspect it could be evidence for
autonomous dissociative parts. However, these participants seem
to have had unintegrated, unaccepted self-states and used the
concept of DID to make meaning of their internal conflicts.
In their narratives they maintained the first-person perspective.
None of them provided sound evidence for extreme forms of
depersonalisation, such as not feeling the body altogether or
out-of-body experiences.
There can be many reasons why people develop symptoms
which resemble those typical of DID. Suggestions about a
dissociative disorder made by healthcare providers can help
people justify and explain inner conflicts or interpersonal
problems. In this study several clinicians had suggested a
dissociative disorder or DID to the patient. Literature on
multiple personalities and therapy focused on them, and
using expressions such as ‘parts’, ‘dissociating’, ‘switches,’ can
also encourage demonstrating such symptoms. There are also
secondary gains explained in this study, such as receiving
attention and care. Draijer and Boon (1999) observe that
people with borderline features justified shameful behavior
and avoided responsibility by attributing their actions to
‘alter personalities.’ Such people can declare amnesia for
their outbursts of anger, or hitting partners. Others explained
their identity confusion and extreme emptiness using the
DID model. All their participants reported emotional neglect
and felt unseen in their childhood, so they adopted a
new DID-patient identity to fill their inner emptiness (Draijer
and Boon, 1999). Just like the participants in this study,
they were angry when that diagnosis was disconfirmed
during the assessment, as if the clinician had taken away
something precious from them. This shows that communicating
the results should be done with understanding, empathy
and care. Patients and clinicians need to understand and
discuss reasons for developing a DID-patient identity, its
advantages and pitfalls.
In countries where clinicians are less familiar with dissociative pathology, there may be a greater risk for both false-negative and false-positive DID diagnoses. The latter is caused
by the growing popularity of that disorder in media and social
networks. People who try to make meaning of their emotional
conflicts, attachment problems and difficulties in establishing
satisfactory relationships, may find the DID concept attractive.
It is important that clinicians who rule out or disconfirm DID,
also provide patients with friendly feedback that encourages
using treatment for their actual problems. Nevertheless, this
may still evoke strong reactions in patients whose feelings
and needs have been neglected, rejected or invalidated by
significant others. Disconfirming DID may be experienced by
them as an attack, taking something away from them, or an
indication that they lie.
relate to what happened during the alleged switch (no amnesia),
maintaining the first-person perspective (I was saying/doing).
Contrary to that, dissociative patients experience much shame
and fear of disclosing their internal parts (Draijer and Boon,
1999). If they become aware that switches had occurred, they try
to make reasonable explanations for the intrusions of parts and
unusual behavior (e.g., I must have been very tired and affected
by the new medicine I am taking).
Amnesia
Dell (2006) mentions various indicators of amnesia in patients
with DID. However, losing memory for unpleasant experiences
may occur in different disorders, usually for behaviors evoking
shame or guilt, or for actions under extreme stress (Laddis
et al., 2017). All patients in this study had problems with
emotional regulation and some said they could not remember
what they said or did when they became very upset. With
some priming, they could recall and describe events. For this
reason, it is recommended to explore evidence for amnesia for
pleasant or neutral activities (e.g., doing shopping or cleaning,
socializing). According to Laddis et al. (2017) there are different
mechanisms underlying memory problems in personality and
dissociative disorders.
Use of Language
Participants in this study often used clinical jargon (e.g.,
flashbacks, switches, and feeling depersonalized) which indicates
they had read about dissociative psychopathology or received
psycho-education. However, they often had lay understanding
of clinical terms. A good example in this study was having
‘flashbacks’ of neutral or pleasant situations which had once been
forgotten. Examples of nightmares did not necessarily indicate
reliving traumatic events during sleep (as in PTSD) but expressed
conflicts and agitation through symbolic, unrealistic, sometimes
upsetting dreams. When talking about behavior of other parts
and their preferences, they often maintained a first-person
perspective. Requesting patients to provide specific examples
is thus crucial.
Depersonalisation
Detachment from feelings and emotions, bodily sensations
and external reality is often present in various disorders
(Simeon and Abugel, 2006). While these phenomena have
been commonly associated with dissociation, Holmes et al.
(2005) stress the differences between detachment (which can
be experienced by both dissociative and non-dissociative
patients) and compartmentalisation, associated with the
existence of dissociative parts. Allen et al. (1999) also stress
that extreme absorptive detachment can interfere with noticing
feelings and bodily sensations, and also with memory. Some
participants in this study tended to enter trance-like states
or get absorbed in their inner reality, subsequently getting
detached from bodily sensations. They also described their
feeling of emptiness in terms of detachment from feelings.
Nevertheless, none of them disclosed evidence for having
distinct dissociative parts. Some of their statements might
have been misleading; for example, when they attributed anger
Limitations and Further Directions
Among the 85 people who participated in a thorough diagnostic
assessment, there were six false-positive DID cases, and this study
focused on their personal experiences and meaning attributed
to the diagnosis. Because IPA studies are highly idiographic,
TABLE 4 | Red flags for identifying false-positive or imitated DID.
This table enumerates suggestive features of false positive or imitated DID cases identified in this study, which should be taken into consideration during diagnostic
assessment.
1. Directly or indirectly expects to confirm self-diagnosed DID.
2. DID previously suggested by someone (friend, psychologist, and doctor) without thorough clinical assessment.
3. Keen on DID diagnosis and familiarized with symptoms: read books, watched videos, talked to other patients, participated in a support group for dissociative
patients.
4. Uses clinical jargon: parts, alters, dissociating, switch, depersonalisation, etc.
5. Reveals little avoidance: eagerly talks about painful experiences and dissociation, no indicators for genuine shame or inner conflicts associated with disclosing
symptoms or parts.
6. Readily justifies losing control of emotions and unacceptable or shameful behavior in terms of not being oneself or being influenced by an alternative personality.
7. No evidence for the intrusions of unwanted and avoided traumatic memories or re-experiencing them in the present.
8. Denies having ego-dystonic thoughts or voices, especially starting in early childhood and child-like voices.
Note: Dissociative patients may be afraid, ashamed, or feel it is forbidden to talk about the voices.
9. No evidence of amnesia for neutral or pleasant everyday activities, e.g., working, doing shopping, socializing, playing with children.
10. Tries to control the interview and provide evidence for having DID, e.g., eagerly reports dissociative symptoms without being asked about them.
11. Announces and performs a switch between personalities during clinical assessment, especially before a good relationship with the clinician and trust has been
established.
12. Finds apparent gains associated with having DID: receives special interest from family and friends with whom symptoms and personalities are eagerly discussed,
runs support groups, blogs or video channels for people with dissociative disorders.
13. Gets upset or disappointed when DID is not confirmed, e.g., demands re-evaluation, excuses oneself for not being accurate enough in giving right answers, wants
to provide more evidence.
which suggested it was probable they had a dissociative
disorder. However, during a clinical diagnostic interview
they did not report a cluster of somatoform or psychoform
dissociative symptoms and did not meet criteria for any
dissociative disorder diagnosis. Clinicians also need to go
beyond the face value of a patient’s responses, ask for specific
examples, and notice their own countertransference. Draijer
and Boon (1999) observed that DID patients were often
experienced by clinicians as very fragile, and exploring
symptoms with people with personality disorders (who try
to exaggerate them and control the interview) can evoke
tiredness or even irritability. It is important that clinicians
understand their own responses and use them in the
diagnostic process.
While psycho-education is considered a crucial element in
the initial treatment of dissociative disorders (Van der Hart
et al., 2006; Howell, 2011; Steele et al., 2016), patients whose
diagnosis has not been confirmed by a thorough diagnostic
assessment should not be encouraged to develop knowledge
about DID symptomatology, because this may affect their clinical
presentation and how they make meaning of their problems.
Subsequently, this may lead to a wrong diagnosis and treatment,
which can become iatrogenic.
they are by nature limited to a small number of participants.
There were two important limitations in this research. Firstly,
information about the level of psychoform symptoms has not
been given, because the validation of the Polish instrument
used for that purpose is not complete. Secondly, the TADS-I, used for collecting clinical data about trauma-related symptoms and dissociation, has not been validated either.
gold standards in Poland for diagnosing dissociative disorders,
video-recordings of diagnostic interviews were carefully analyzed
and discussed by all authors to agree upon the diagnosis. Taking
this into consideration, further qualitative and quantitative
research is recommended to formulate and validate more
specific diagnostic criteria for DID and guidelines for the
differential diagnosis.
CONCLUSION
Clinicians need to understand the complexity of DID
symptoms and psychological mechanisms responsible
for them in order to differentiate between genuine and
imitated post-traumatic conditions. Several features identified in this study, summarized in Table 4, may indicate false-positive or imitated DID and should be taken into consideration during diagnostic assessment. In Poland, as
in many countries, this requires more systematic training
in diagnosis for psychiatrists and clinical psychologists in
order to prevent under- and over-diagnosis of dissociative
disorders, DID in particular. It is not uncommon that
patients exaggerate on self-report questionnaires when
they are invested in certain symptoms. In this study, all
participants had scores above the cut-off score of 28 on
the SDQ-20, a measure to assess somatoform dissociation,
DATA AVAILABILITY STATEMENT
The datasets generated for this study are not readily available
because data contain highly sensitive clinical material,
including medical data which cannot be shared according
to local regulations. Requests to access the datasets should
be directed to IP, [email protected].
ETHICS STATEMENT
The studies involving human participants were reviewed
and approved by Ethical Review Board at the SWPS
University of Social Sciences and Humanities. The
patients/participants provided their written informed consent
to participate in this study.
FUNDING
Grant number 2016/22/E/HS6/00306 was obtained for the study
“Interpretative phenomenological analysis of depersonalization
and derealization in clinical and non-clinical groups.”
AUTHOR CONTRIBUTIONS

IP collected qualitative data, performed the analysis, and prepared the manuscript. AB-N transcribed and analyzed the interviews and helped in literature review and manuscript preparation. RT performed psychiatric assessment and helped in data analysis and manuscript preparation. SB helped in data analysis and manuscript preparation. All authors contributed to the article and approved the submitted version.
REFERENCES
Leonard, D., Brann, S., and Tiller, J. (2005). Dissociative disorders: pathways to
diagnosis, clinician attitudes and their impact. Aust. N. Z, J. Psychiatry 39,
940–946. doi: 10.1080/j.1440-1614.2005.01700.x
Longden, E., Moskowitz, A., Dorahy, M. J., and Perona-Garcelán, S. (2019).
"Auditory verbal hallucinations: prevalence, phenomenology, and the dissociation hypothesis," in Psychosis, Trauma and Dissociation: Evolving Perspectives on Severe Psychopathology (Hoboken, NJ: John Wiley & Sons
Ltd.), 207–222.
Nijenhuis, E., van der Hart, O., and Kruger, K. (2002). The psychometric
characteristics of the traumatic experiences checklist (TEC): first findings
among psychiatric outpatients. Clin. Psychol. Psychother. 9, 200–210. doi: 10.
1002/cpp.332
Pietkiewicz, I. J., Hełka, A., and Tomalski, R. (2018). Validity and reliability of
the Polish online and pen-and-paper versions of the somatoform dissociation
questionnaires (SDQ-20 and PSDQ-5). Eur. J. Trauma Dissociation 3, 23–31.
doi: 10.1016/j.ejtd.2018.05.002
Pietkiewicz, I. J., and Smith, J. A. (2014). A practical guide to using interpretative
phenomenological analysis in qualitative research psychology. Psychol. J. 20,
7–14. doi: 10.14691/CPPJ.20.1.7
Putnam, F. W., Guroff, J. J., Silberman, E. K., Barban, L., and Post, R. M. (1986). The
clinical phenomenology of multiple personality disorder: review of 100 recent
cases. J. Clin. Psychiatry 47, 285–293.
Ross, C. A., Norton, G. R., and Wozney, K. (1989). Multiple personality disorder:
an analysis of 236 cases. Can. J. Psychiatry 34, 413–418. doi: 10.1177/
070674378903400509
Sar, V. (2011). Epidemiology of dissociative disorders: an overview. Epidemiol. Res.
Int. 2011, 404538. doi: 10.1155/2011/404538
Simeon, D., and Abugel, J. (2006). Feeling Unreal. Depersonalization
Disorder and the Loss of the Self. New York, NY: Oxford University
Press.
Smith, J. A., and Osborn, M. (2008). “Interpretative phenomenological analysis,”
in Qualitative Psychology: A Practical Guide to Research Methods, ed. J. Smith
(London: Sage), 53–80.
Steele, K., Boon, S., and Van der Hart, O. (2016). Treating Trauma-Related
Dissociation. A Practical, Integrative Approach. New York, NY: W. W. Norton &
Company.
Steele, K., Van Der Hart, O., and Nijenhuis, E. R. (2005). Phase-oriented treatment
of structural dissociation in complex traumatization: overcoming trauma-related phobias. J. Trauma Dissociation 6, 11–53.
Thomas, A. (2001). Factitious and malingered dissociative identity disorder:
clinical features observed in 18 cases. J. Trauma Dissociation 2, 59–77. doi:
10.1300/J229v02n04_04
Van der Hart, O., Nijenhuis, E., and Steele, K. (2006). The Haunted Self: Structural
Dissociation and the Treatment of Chronic Traumatization. London: W.W.
Norton & Co.
Van der Hart, O., Nijenhuis, E. R., and Solomon, R. (2010). Dissociation of
the personality in complex trauma-related disorders and EMDR: theoretical
considerations. J. EMDR Pract. Res. 4, 76–92. doi: 10.1891/1933-3196.
4.2.76
Allen, J. G., Console, D. A., and Lewis, L. (1999). Dissociative detachment
and memory impairment: reversible amnesia or encoding failure? Compre.
Psychiatry 40, 160–171. doi: 10.1016/S0010-440X(99)90121-9
American Psychiatric Association (2013). Diagnostic and Statistical Manual of
Mental Disorders (DSM-5), Fifth Edn. Arlington, VA: American Psychiatric
Publishing.
Boon, S., and Draijer, N. (1993). The differentiation of patients with MPD or
DDNOS from patients with a cluster B personality disorder. Dissociation 6,
126–135.
Boon, S., and Matthess, H. (2017). Trauma and Dissociation Symptoms Interview
(TADS-I), version 1.9.
Boon, S. A., and Draijer, P. J. (1995). Screening en Diagnostiek van Dissociatieve
Stoornissen. Lisse: Swets & Zeitlinger.
Boysen, G. A., and VanBergen, A. (2014). Simulation of multiple personalities:
a review of research comparing diagnosed and simulated dissociative identity
disorder. Clin. Psychol. Rev. 34, 14–28. doi: 10.1016/j.cpr.2013.10.008
Brand, B. L., Webermann, A. R., and Frankel, A. S. (2016). Assessment of complex
dissociative disorder patients and simulated dissociation in forensic contexts.
Int. J. Law Psychiatry 49, 197–204. doi: 10.1016/j.ijlp.2016.10.006
Coons, P. M., and Milstein, V. (1994). Factitious or malingered multiple personality
disorder: eleven cases. Dissociation 7, 81–85.
Dell, P. F. (2006). A new model of dissociative identity disorder. Psychiatr. Clin. 29,
1–26. doi: 10.1016/j.psc.2005.10.013
Dorahy, M. J., Brand, B. L., Şar, V., Krüger, C., Stavropoulos, P., Martínez-Taboas,
A., et al. (2014). Dissociative identity disorder: an empirical overview. Aust.
N. Z. J. Psychiatry 48, 402–417. doi: 10.1177/0004867414527523
Dorahy, M. J., Shannon, C., Seagar, L., Corr, M., Stewart, K., Hanna, D., et al. (2009).
Auditory hallucinations in dissociative identity disorder and schizophrenia with
and without a childhood trauma history: similarities and differences. J. Nerv.
Ment. Dis. 197, 892–898. doi: 10.1097/NMD.0b013e3181c299ea
Draijer, N., and Boon, S. (1999). The imitation of dissociative identity disorder:
patients at risk, therapists at risk. J. Psychiatry Law 27, 423–458. doi: 10.1177/
009318539902700304
Friedl, M., Draijer, N., and De Jonge, P. (2000). Prevalence of dissociative disorders
in psychiatric in−patients: the impact of study characteristics. Acta Psychiatr.
Scand. 102, 423–428. doi: 10.1034/j.1600-0447.2000.102006423.x
Holmes, E. A., Brown, R. J., Mansell, W., Fearon, R. P., Hunter, E. C., Frasquilho, F.,
et al. (2005). Are there two qualitatively distinct forms of dissociation? a review
and some clinical implications. Clin. Psychol. Rev. 25, 1–23.
Howell, E. F. (2011). Understanding and Treating Dissociative Identity Disorder: A
Relational Approach. New York, NY: Routledge.
International Society for the Study of Trauma and Dissociation (2011). Guidelines
for treating dissociative identity disorder in adults, third revision. J. Trauma
Dissociation 12, 115–187. doi: 10.1080/15299732.2011.537247
Laddis, A., Dell, P. F., and Korzekwa, M. (2017). Comparing the symptoms and
mechanisms of “dissociation” in dissociative identity disorder and borderline
personality disorder. J. Trauma Dissociation 18, 139–173.
World Health Organization (1993). The ICD-10 Classification of Mental and
Behavioural Disorders: Clinical Descriptions and Diagnostic Guidelines. Geneva:
World Health Organization.
Copyright © 2021 Pietkiewicz, Bańbura-Nowak, Tomalski and Boon. This is an
open-access article distributed under the terms of the Creative Commons Attribution
License (CC BY). The use, distribution or reproduction in other forums is permitted,
provided the original author(s) and the copyright owner(s) are credited and that the
original publication in this journal is cited, in accordance with accepted academic
practice. No use, distribution or reproduction is permitted which does not comply
with these terms.
Conflict of Interest: The authors declare that the research was conducted in the
absence of any commercial or financial relationships that could be construed as a
potential conflict of interest.
|
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | My aunt currently lives in Germany. She said they have so many protesters and they always interfere with regular public activities. I heard there was an issue with the German courts and the protestors. Can you tell me more about the situation? Specifically what happened and how the Germany courts responded? I believe it was something along the lines of they got sued. Give me more than 400 words. | Journalists file complaint with Germany constitutional court over phone wiretappingNewsMartin Kraft, CC BY-SA 3.0, via Wikimedia Commons
Journalists file complaint with Germany constitutional court over phone wiretapping
Salma Ben Mariem | Faculty of Law and Political Science of Sousse, TN
September 12, 2024 01:42:34 pm
Three journalist organizations filed a complaint against German authorities with the country’s Federal Constitutional Court, local media reported on Wednesday. The complaint concerns investigators’ wiretapping of phone calls between journalists and climate activists. This constitutional complaint follows two previous verdicts issued by Munich’s District Court and Munich’s Regional Court.
The three associations that filed the complaint are the Bavarian Journalists Association (BJV), Reporters Without Borders (RSF) and the Society for Civil Rights (GFF). They claimed that investigators illegally listened to phone conversations between journalists and members of the group the Last Generation (Letzte Generation). They argued that this measure constituted a violation of press freedom and a threat to democracy. The Last Generation is a group of climate activists who use direct action methods such as traffic blockades and vandalism of buildings, private boats and planes to protest against and raise awareness of climate change.
According to a press release published by the GFF, the Munich prosecutor’s office surveilled the phone line provided by the Last Generation to receive journalist inquiries. The surveillance lasted for months and affected 171 journalists who were not informed by authorities of this investigative measure.
Consequently, the three journalists’ associations filed a first complaint with Munich’s District Court, which ruled that the surveillance measure was lawful. The associations filed a second complaint with Munich’s Regional Court, which saw the wiretapping of phone calls as “a profound interference with press freedom.” However, the court considered that the surveillance measure was “proportionate” and rejected the complaint because of an ongoing investigation at the time against seven Last Generation activists over suspicion of supporting a criminal organization. This accusation was denied by the environmental group.
The Chairman of the BJV Harald Stocker criticized both rulings and stated in a press release that before approving a wiretapping operation, judges needed to weigh up the interference with press freedom. He explained, “If judges authorize the recording of confidential conversations with journalists, they must first exhaust other options and carefully examine and justify the benefits.” He also added, regarding the Regional Court’s verdict, that it wasn’t sufficient to recognize several months later that the wiretapping operation constituted an interference with press freedom and at the same time uphold the measure as lawful.
Furthermore, the BJV Managing Director Dennis Amour described the surveillance of journalists’ phone conversations as a “disproportionate” measure that the courts shouldn’t use to circumvent the protection of reporters bound by professional secrecy.
By raising this complaint, the concerned journalists want to ensure that in the future, all courts carefully consider the impact on press freedom and provide a documented assessment of alternatives before approving any surveillance measures.
As environmental protests escalated in many European countries, a July Human Rights Watch report revealed that governments have also intensified suppressive measures to quell activists’ dissent. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
My aunt currently lives in Germany. She said they have so many protesters and they always interfere with regular public activities. I heard there was an issue with the German courts and the protestors. Can you tell me more about the situation? Specifically what happened and how the Germany courts responded? I believe it was something along the lines of they got sued. Give me more than 400 words.
<TEXT>
Journalists file complaint with Germany constitutional court over phone wiretapping
Salma Ben Mariem | Faculty of Law and Political Science of Sousse, TN
September 12, 2024 01:42:34 pm
Three journalist organizations filed a complaint against German authorities with the country’s Federal Constitutional Court, local media reported on Wednesday. The complaint concerns investigators’ wiretapping of phone calls between journalists and climate activists. This constitutional complaint follows two previous verdicts issued by Munich’s District Court and Munich’s Regional Court.
The three associations that filed the complaint are the Bavarian Journalists Association (BJV), Reporters Without Borders (RSF) and the Society for Civil Rights (GFF). They claimed that investigators illegally listened to phone conversations between journalists and members of the group the Last Generation (Letzte Generation). They argued that this measure constituted a violation of press freedom and a threat to democracy. The Last Generation is a group of climate activists who use direct action methods such as traffic blockades and vandalism of buildings, private boats and planes to protest against and raise awareness of climate change.
According to a press release published by the GFF, the Munich prosecutor’s office surveilled the phone line provided by the Last Generation to receive journalist inquiries. The surveillance lasted for months and affected 171 journalists who were not informed by authorities of this investigative measure.
Consequently, the three journalists’ associations filed a first complaint with Munich’s District Court, which ruled that the surveillance measure was lawful. The associations filed a second complaint with Munich’s Regional Court, which saw the wiretapping of phone calls as “a profound interference with press freedom.” However, the court considered that the surveillance measure was “proportionate” and rejected the complaint because of an ongoing investigation at the time against seven Last Generation activists over suspicion of supporting a criminal organization. This accusation was denied by the environmental group.
The Chairman of the BJV Harald Stocker criticized both rulings and stated in a press release that before approving a wiretapping operation, judges needed to weigh up the interference with press freedom. He explained, “If judges authorize the recording of confidential conversations with journalists, they must first exhaust other options and carefully examine and justify the benefits.” He also added, regarding the Regional Court’s verdict, that it wasn’t sufficient to recognize several months later that the wiretapping operation constituted an interference with press freedom and at the same time uphold the measure as lawful.
Furthermore, the BJV Managing Director Dennis Amour described the surveillance of journalists’ phone conversations as a “disproportionate” measure that the courts shouldn’t use to circumvent the protection of reporters bound by professional secrecy.
By raising this complaint, the concerned journalists want to ensure that in the future, all courts carefully consider the impact on press freedom and provide a documented assessment of alternatives before approving any surveillance measures.
As environmental protests escalated in many European countries, a July Human Rights Watch report revealed that governments have also intensified suppressive measures to quell activists’ dissent.
https://www.jurist.org/news/2024/09/journalists-file-complaint-with-germany-constitutional-court-over-phone-wiretapping/ |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | My son wants to start playing football but I'm scared he will get a concussion. Can you help me to understand what concussions actually are? How does it affect a growing child? | According to a recent Harris Poll, football continues its long reign as America’s most popular sport. From pre-game tailgating to the actual event, where any team can win any week, to post-game celebration parties, and of course, the Super Bowl – one of the most-watched television programs in the country – what’s not to love?
Yet despite its popularity, there is increasing concern and controversy about the long-term safety of playing football, which has resulted in many parents deciding not to allow their kids to participate in the game. In fact, a recent Bloomberg Politics poll found that half of all Americans say they wouldn’t want their child to play football.
The biggest concern? Head injury. According to HealthResearchFunding.org, concussion rates for children under age 19 who play tackle football have doubled over the last decade, most occurring during practices. Concussions can occur with a blow to the head through helmet to helmet contact, and if undiagnosed and left untreated can lead to permanent brain damage. What is more, “subconcussive hits,” or repeated blows to the head that don’t result in concussions can, over a long period of time, have long term health effects, too – primarily chronic traumatic encephalopathy (CTE), a progressive degenerative disease of the brain found in athletes (and others) with a history of repetitive brain trauma.
CTE caused by repetitive blows to the head as a result of playing football was brought to light in 2002 by Bennet Omalu, a Nigerian-American physician and forensic pathologist, who discovered the condition during an autopsy of former Pittsburgh Steelers player Mike Webster. This finding, as well as his subsequent reports of CTE in other deceased NFL players, eventually led to the book and later the 2015 film, “Concussion.”
It also spurred increasing concern about whether children should be playing football at all. If you were to ask Omalu, the answer would be no. In a 2015 “The New York Times” editorial piece he wrote: “If a child who plays football is subjected to advanced radiological and neurocognitive studies during the season and several months after the season, there can be evidence of brain damage at the cellular level of brain functioning, even if there were no documented concussions or reported symptoms. If that child continues to play over many seasons, these cellular injuries accumulate to cause irreversible brain damage, which we know now by the name chronic traumatic encephalopathy…”
But many sports medicine physicians, like Kristin Ernest, MD, who practices at Texas Children’s Hospital the Woodlands, are less adamant about banning youth tackle football. “There is no conclusive evidence that shows us children shouldn’t play football,” she claimed. “But they should be taught proper technique, and they should have proper fitting equipment.”
Jonathan Minor, MD, a pediatric sports medicine specialist on the concussion management team at CHOC hospital in California, agreed that we don’t know whether there’s a cumulative effect of head injury in football beginning in childhood. But he does believe that such injuries may be prevented by introducing kids first to non-impact football, allowing them to develop skills before playing the contact game. He also advocates core strengthening exercises as a part of football practice. “There is some research showing that core and neck strengthening can help to prevent traumatic brain injury during football,” he said.
And the American Academy of Pediatrics (AAP), while endorsing efforts to limit contact practices in youth football, recently refused to support those calling for an outright ban on tackling in football for athletes below age 18, saying that it is unwilling to recommend such a fundamental change in the way the game is played.
The bottom line is that it’s up to parents to decide whether the risks of youth tackle football outweigh the recreational benefit. Some things to consider for kids who do play football:
Learn the youth concussion laws in your state. While all states now have laws designed to reduce risk of concussion and other head injuries in youth football, not all state laws are equal, according to USA Football. Fewer than half contain all of the key principles, such as limits on full contact practice, mandatory education about concussion symptoms for coaches, removal of a player from the game if a head injury is suspected, and written medical clearance for return to play. Only a handful of laws specify which ages/grades are covered and whether the law pertains to public and private schools and athletic leagues. Worse yet, almost all lack consequences for schools or leagues that don’t comply with the law. Find your state laws on USAFootball.com and make sure they are being followed.
Make sure your child is being taught “heads up” tackle technique, advised Dr. Ernest. This USA-designed program utilizes five fundamentals through a series of drills to reinforce proper tackling mechanics and teaches players to focus on reducing helmet contact. Research presented at the American Academy of Pediatrics’ recent meeting in Washington D.C. showed that limits on full-contact practice as well as teaching Heads Up tackling are working to reduce concussion risk in youth football.
Consider a baseline neurocognitive test, such as ImPACT testing, for your child prior to the football season, even if it’s not required by your child’s coach, said Dr. Minor. “A neurocognitive test is a computer-based 20-30 minute test that measures brain functions like visual and verbal memory and reaction time,” he explained. “It’s very helpful to do at the beginning of the season and then repeat if there are signs of concussion. It can help, along with medical evaluation, to make a diagnosis.”
Make sure your child is fitted properly with protective equipment and wears it all the time.
Know the symptoms of concussion – and make sure your child knows them, too. He should know that it’s important to report any of these symptoms to the coach and to you and that he should be removed from the practice or game immediately. A child who is suspected to have a concussion should see a licensed health professional as soon as possible.
Follow the doctor’s orders exactly if your child has a concussion, said Dr. Ernest. Failure to follow orders for physical and cognitive rest can prolong recovery. And know that those who have a first concussion are 3-6 times more likely to suffer another concussion than an athlete who has not had a concussion. So returning to play before the brain has fully healed can put your athlete at higher risk for complications if a second concussion is sustained.
WHAT YOU SHOULD KNOW ABOUT CONCUSSIONS
WHAT IT IS: According to sports medicine physician Kristin Ernest, MD, a concussion is an injury caused by force transmitted to the brain itself. While it generally occurs with a blow to the head, any injury that causes the brain to shift rapidly within the skull (such as a whiplash injury) can cause a concussion.
SYMPTOMS OF CONCUSSION: While symptoms can vary from child to child, said Dr. Ernest, some of the most common are headache, nausea, dizziness, irritability or crying, trouble concentrating, and sleep disturbance (sleeping more or less than usual).
HOW IT’S DIAGNOSED: A concussion will not show up on imaging tests, such as MRI, said Dr. Ernest. It is diagnosed through medical evaluation of symptoms and often, neurocognitive testing.
HOW CONCUSSIONS ARE TREATED: Concussions are treated with both physical and cognitive rest, according to Dr. Ernest. This includes rest from all athletic activities, as well as from homework and screen time. In general, recovery takes about 7-10 days, longer for younger children and for those with a history of migraine or ADHD. MS&F | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
My son wants to start playing football but I'm scared he will get a concussion. Can you help me to understand what concussions actually are? How does it affect a growing child?
<TEXT>
According to a recent Harris Poll, football continues its long reign as America’s most popular sport. From pre-game tailgating to the actual event, where any team can win any week, to post-game celebration parties, and of course, the Super Bowl – one of the most-watched television programs in the country – what’s not to love?
Yet despite its popularity, there is increasing concern and controversy about the long-term safety of playing football, which has resulted in many parents deciding not to allow their kids to participate in the game. In fact, a recent Bloomberg Politics poll found that half of all Americans say they wouldn’t want their child to play football.
The biggest concern? Head injury. According to HealthResearchFunding.org, concussion rates for children under age 19 who play tackle football have doubled over the last decade, most occurring during practices. Concussions can occur with a blow to the head through helmet to helmet contact, and if undiagnosed and left untreated can lead to permanent brain damage. What is more, “subconcussive hits,” or repeated blows to the head that don’t result in concussions can, over a long period of time, have long term health effects, too – primarily chronic traumatic encephalopathy (CTE), a progressive degenerative disease of the brain found in athletes (and others) with a history of repetitive brain trauma.
CTE caused by repetitive blows to the head as a result of playing football was brought to light in 2002 by Bennet Omalu, a Nigerian-American physician and forensic pathologist, who discovered the condition during an autopsy of former Pittsburgh Steelers player Mike Webster. This finding, as well as his subsequent reports of CTE in other deceased NFL players, eventually led to the book and later the 2015 film, “Concussion.”
It also spurred increasing concern about whether children should be playing football at all. If you were to ask Omalu, the answer would be no. In a 2015 “The New York Times” editorial piece he wrote: “If a child who plays football is subjected to advanced radiological and neurocognitive studies during the season and several months after the season, there can be evidence of brain damage at the cellular level of brain functioning, even if there were no documented concussions or reported symptoms. If that child continues to play over many seasons, these cellular injuries accumulate to cause irreversible brain damage, which we know now by the name chronic traumatic encephalopathy…”
But many sports medicine physicians, like Kristin Ernest, MD, who practices at Texas Children’s Hospital the Woodlands, are less adamant about banning youth tackle football. “There is no conclusive evidence that shows us children shouldn’t play football,” she claimed. “But they should be taught proper technique, and they should have proper fitting equipment.”
Jonathan Minor, MD, a pediatric sports medicine specialist on the concussion management team at CHOC hospital in California, agreed that we don’t know whether there’s a cumulative effect of head injury in football beginning in childhood. But he does believe that such injuries may be prevented by introducing kids first to non-impact football, allowing them to develop skills before playing the contact game. He also advocates core strengthening exercises as a part of football practice. “There is some research showing that core and neck strengthening can help to prevent traumatic brain injury during football,” he said.
And the American Academy of Pediatrics (AAP), while endorsing efforts to limit contact practices in youth football, recently refused to support those calling for an outright ban on tackling in football for athletes below age 18, saying that it is unwilling to recommend such a fundamental change in the way the game is played.
The bottom line is that it’s up to parents to decide whether the risks of youth tackle football outweigh the recreational benefit. Some things to consider for kids who do play football:
Learn the youth concussion laws in your state. While all states now have laws designed to reduce risk of concussion and other head injuries in youth football, not all state laws are equal, according to USA Football. Fewer than half contain all of the key principles, such as limits on full contact practice, mandatory education about concussion symptoms for coaches, removal of a player from the game if a head injury is suspected, and written medical clearance for return to play. Only a handful of laws specify which ages/grades are covered and whether the law pertains to public and private schools and athletic leagues. Worse yet, almost all lack consequences for schools or leagues that don’t comply with the law. Find your state laws on USAFootball.com and make sure they are being followed.
Make sure your child is being taught “heads up” tackle technique, advised Dr. Ernest. This USA Football-designed program utilizes five fundamentals through a series of drills to reinforce proper tackling mechanics and teaches players to focus on reducing helmet contact. Research presented at the American Academy of Pediatrics’ recent meeting in Washington D.C. showed that limits on full-contact practice as well as teaching Heads Up tackling are working to reduce concussion risk in youth football.
Consider a baseline neurocognitive test, such as ImPACT testing, for your child prior to the football season, even if it’s not required by your child’s coach, said Dr. Minor. “A neurocognitive test is a computer-based 20-30 minute test that measures brain functions like visual and verbal memory and reaction time,” he explained. “It’s very helpful to do at the beginning of the season and then repeat if there are signs of concussion. It can help, along with medical evaluation, to make a diagnosis.”
Make sure your child is fitted properly with protective equipment and wears it all the time.
Know the symptoms of concussion – and make sure your child knows them, too. He should know that it’s important to report any of these symptoms to the coach and to you and that he should be removed from the practice or game immediately. A child who is suspected to have a concussion should see a licensed health professional as soon as possible.
Follow the doctor’s orders exactly if your child has a concussion, said Dr. Ernest. Failure to follow orders for physical and cognitive rest can prolong recovery. And know that those who have a first concussion are 3-6 times more likely to suffer another concussion than an athlete who has not had a concussion. So returning to play before the brain has fully healed can put your athlete at higher risk for complications if a second concussion is sustained.
WHAT YOU SHOULD KNOW ABOUT CONCUSSIONS
WHAT IT IS: According to sports medicine physician Kristin Ernest, MD, a concussion is an injury caused by force transmitted to the brain itself. While it generally occurs with a blow to the head, any injury that causes the brain to shift rapidly within the skull (such as a whiplash injury) can cause a concussion.
SYMPTOMS OF CONCUSSION: While symptoms can vary from child to child, said Dr. Ernest, some of the most common are headache, nausea, dizziness, irritability or crying, trouble concentrating, and sleep disturbance (sleeping more or less than usual).
HOW IT’S DIAGNOSED: A concussion will not show up on imaging tests, such as MRI, said Dr. Ernest. It is diagnosed through medical evaluation of symptoms and often, neurocognitive testing.
HOW CONCUSSIONS ARE TREATED: Concussions are treated with both physical and cognitive rest, according to Dr. Ernest. This includes rest from all athletic activities, as well as from homework and screen time. In general, recovery takes about 7-10 days, longer for younger children and for those with a history of migraine or ADHD. MS&F
https://choc.org/news/football-concussion-worth-risk/ |
Fulfill user requests utilizing only the information provided in the prompt. If you cannot answer using the context alone, state that you can't determine the answer due to a lack of context information. | What are the important things to know about partial trisomy 18? | Trisomy 18 Description Trisomy 18, also called Edwards syndrome, is a chromosomal condition associated with abnormalities in many parts of the body. Individuals with trisomy 18 often have slow growth before birth (intrauterine growth retardation) and a low birth weight. Affected individuals may have heart defects and abnormalities of other organs that develop before birth. Other features of trisomy 18 include a small, abnormally shaped head; a small jaw and mouth; and clenched fists with overlapping fingers. Due to the presence of several life-threatening medical problems, many individuals with trisomy 18 die before birth or within their first month. Five to 10 percent of children with this condition live past their first year, and these children often have severe intellectual disability. Frequency Trisomy 18 occurs in about 1 in 5,000 live-born infants; it is more common in pregnancy, but many affected fetuses do not survive to term. Although women of all ages can have a child with trisomy 18, the chance of having a child with this condition increases as a woman gets older. Causes Most cases of trisomy 18 result from having three copies of chromosome 18 in each cell in the body instead of the usual two copies. The extra genetic material disrupts the normal course of development, causing the characteristic features of trisomy 18. Approximately 5 percent of people with trisomy 18 have an extra copy of chromosome 18 in only some of the body's cells. In these people, the condition is called mosaic trisomy 18. The severity of mosaic trisomy 18 depends on the type and number of cells that have the extra chromosome. The development of individuals with this form of trisomy 18 may range from normal to severely affected. Very rarely, part of the long (q) arm of chromosome 18 becomes attached (translocated) to another chromosome during the formation of reproductive cells (eggs and sperm) or very early in embryonic development. Affected individuals have two copies of chromosome 18, plus the extra material from chromosome 18 attached to another chromosome. People with this genetic change are said to have partial trisomy 18. If only part of the q arm is present in three copies, the physical signs of partial trisomy 18 may be less severe than those typically seen in trisomy 18. If the entire q arm is present in three copies, individuals may be as severely affected as if they had three full copies of chromosome 18. Learn more about the chromosome associated with Trisomy 18 • chromosome 18 Inheritance Most cases of trisomy 18 are not inherited, but occur as random events during the formation of eggs and sperm. An error in cell division called nondisjunction results in a reproductive cell with an abnormal number of chromosomes. For example, an egg or sperm cell may gain an extra copy of chromosome 18. If one of these atypical reproductive cells contributes to the genetic makeup of a child, the child will have an extra chromosome 18 in each of the body's cells. Mosaic trisomy 18 is also not inherited. It occurs as a random event during cell division early in embryonic development. As a result, some of the body's cells have the usual two copies of chromosome 18, and other cells have three copies of this chromosome. 
Partial trisomy 18 can be inherited. An unaffected person can carry a rearrangement of genetic material between chromosome 18 and another chromosome. This rearrangement is called a balanced translocation because there is no extra material from chromosome 18. Although they do not have signs of trisomy 18, people who carry this type of balanced translocation are at an increased risk of having children with the condition. Other Names for This Condition • Complete trisomy 18 syndrome • Edwards syndrome • Trisomy 18 syndrome • Trisomy E syndrome Additional Information & Resources Genetic Testing Information • Genetic Testing Registry: Complete trisomy 18 (https://www.ncbi.nlm.nih.gov/gtr/co nditions/C0152096/) Genetic and Rare Diseases Information Center • Trisomy 18 (https://rarediseases.info.nih.gov/diseases/6321/index) Patient Support and Advocacy Resources • National Organization for Rare Disorders (NORD) (https://rarediseases.org/) Clinical Trials • ClinicalTrials.gov (https://clinicaltrials.gov/search?cond=%22Trisomy 18%22) Scientific Articles on PubMed • PubMed (https://pubmed.ncbi.nlm.nih.gov/?term=%28%28trisomy+18%5BTIAB%5 D%29+OR+%28Edwards+syndrome%5BTIAB%5D%29%29+AND+english%5Bla% 5D+AND+human%5Bmh%5D+AND+%22last+360+days%22%5Bdp%5D) | System Instructions: Fulfill user requests utilizing only the information provided in the prompt. If you cannot answer using the context alone, state that you can't determine the answer due to a lack of context information.
Question: What are the important things to know about partial trisomy 18?
Context: Trisomy 18 Description Trisomy 18, also called Edwards syndrome, is a chromosomal condition associated with abnormalities in many parts of the body. Individuals with trisomy 18 often have slow growth before birth (intrauterine growth retardation) and a low birth weight. Affected individuals may have heart defects and abnormalities of other organs that develop before birth. Other features of trisomy 18 include a small, abnormally shaped head; a small jaw and mouth; and clenched fists with overlapping fingers. Due to the presence of several life-threatening medical problems, many individuals with trisomy 18 die before birth or within their first month. Five to 10 percent of children with this condition live past their first year, and these children often have severe intellectual disability. Frequency Trisomy 18 occurs in about 1 in 5,000 live-born infants; it is more common in pregnancy, but many affected fetuses do not survive to term. Although women of all ages can have a child with trisomy 18, the chance of having a child with this condition increases as a woman gets older. Causes Most cases of trisomy 18 result from having three copies of chromosome 18 in each cell in the body instead of the usual two copies. The extra genetic material disrupts the normal course of development, causing the characteristic features of trisomy 18. Approximately 5 percent of people with trisomy 18 have an extra copy of chromosome 18 in only some of the body's cells. In these people, the condition is called mosaic trisomy 18. The severity of mosaic trisomy 18 depends on the type and number of cells that have the extra chromosome. The development of individuals with this form of trisomy 18 may range from normal to severely affected. Very rarely, part of the long (q) arm of chromosome 18 becomes attached (translocated) to another chromosome during the formation of reproductive cells (eggs and sperm) or very early in embryonic development. Affected individuals have two copies of chromosome 18, plus the extra material from chromosome 18 attached to another chromosome. People with this genetic change are said to have partial trisomy 18. If only part of the q arm is present in three copies, the physical signs of partial trisomy 18 may be less severe than those typically seen in trisomy 18. If the entire q arm is present in three copies, individuals may be as severely affected as if they had three full copies of chromosome 18. Learn more about the chromosome associated with Trisomy 18 • chromosome 18 Inheritance Most cases of trisomy 18 are not inherited, but occur as random events during the formation of eggs and sperm. An error in cell division called nondisjunction results in a reproductive cell with an abnormal number of chromosomes. For example, an egg or sperm cell may gain an extra copy of chromosome 18. If one of these atypical reproductive cells contributes to the genetic makeup of a child, the child will have an extra chromosome 18 in each of the body's cells. Mosaic trisomy 18 is also not inherited. It occurs as a random event during cell division early in embryonic development. As a result, some of the body's cells have the usual two copies of chromosome 18, and other cells have three copies of this chromosome. Partial trisomy 18 can be inherited. An unaffected person can carry a rearrangement of genetic material between chromosome 18 and another chromosome. This rearrangement is called a balanced translocation because there is no extra material from chromosome 18. 
Although they do not have signs of trisomy 18, people who carry this type of balanced translocation are at an increased risk of having children with the condition. Other Names for This Condition • Complete trisomy 18 syndrome • Edwards syndrome • Trisomy 18 syndrome • Trisomy E syndrome Additional Information & Resources Genetic Testing Information • Genetic Testing Registry: Complete trisomy 18 (https://www.ncbi.nlm.nih.gov/gtr/co nditions/C0152096/) Genetic and Rare Diseases Information Center • Trisomy 18 (https://rarediseases.info.nih.gov/diseases/6321/index) Patient Support and Advocacy Resources • National Organization for Rare Disorders (NORD) (https://rarediseases.org/) Clinical Trials • ClinicalTrials.gov (https://clinicaltrials.gov/search?cond=%22Trisomy 18%22) Scientific Articles on PubMed • PubMed (https://pubmed.ncbi.nlm.nih.gov/?term=%28%28trisomy+18%5BTIAB%5 D%29+OR+%28Edwards+syndrome%5BTIAB%5D%29%29+AND+english%5Bla% 5D+AND+human%5Bmh%5D+AND+%22last+360+days%22%5Bdp%5D) |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | Can you summarize what happened in the Qualitex Co. v. Jacobson Products Co. decision? What act supported the right to include color to qualify as a trademark? Explain in a minimum of 200 words how the act's language supported Qualitex Co.'s opinion. | The case before us grows out of petitioner Qualitex Company's use (since the 1950's) of a special shade of green-gold color on the pads that it makes and sells to dry cleaning firms for use on dry cleaning presses. In 1989, respondent Jacobson Products (a Qualitex rival) began to sell its own press pads to dry cleaning firms; and it colored those pads a similar green-gold. In 1991, Qualitex registered the special green-gold color on press pads with the Patent and Trademark Office as a trademark. Registration No. 1,633,711 (Feb. 5, 1991). Qualitex subsequently added a trademark infringement count, 15 U. S. C. § 1114(1), to an unfair competition claim, § 1125(a), in a lawsuit it had already filed challenging Jacobson's use of the green-gold color.
Qualitex won the lawsuit in the District Court. 21 U. S. P. Q. 2d 1457 (CD Cal. 1991). But, the Court of Appeals for the Ninth Circuit set aside the judgment in Qualitex's favor on the trademark infringement claim because, in that Circuit's view, the Lanham Act does not permit Qualitex, or anyone else, to register "color alone" as a trademark. 13 F.3d 1297, 1300, 1302 (1994).
The Courts of Appeals have differed as to whether or not the law recognizes the use of color alone as a trademark. Compare NutraSweet Co. v. Stadt Corp., 917 F.2d 1024, 1028 (CA7 1990) (absolute prohibition against protection of color alone), with In re Owens-Corning Fiberglas Corp., 774 F.2d 1116, 1128 (CA Fed. 1985) (allowing registration of color pink for fiberglass insulation), and Master Distributors, Inc. v. Pako Corp., 986 F.2d 219, 224 (CA8 1993) (declining to establish per se prohibition against protecting color alone as a trademark). Therefore, this Court granted certiorari. 512 U. S. 1287 (1994). We now hold that there is no rule absolutely barring the use of color alone, and we reverse the judgment of the Ninth Circuit.
II
The Lanham Act gives a seller or producer the exclusive right to "register" a trademark, 15 U. S. C. § 1052 (1988 ed. and Supp. V), and to prevent his or her competitors from using that trademark, § 1114(1). Both the language of the Act and the basic underlying principles of trademark law would seem to include color within the universe of things that can qualify as a trademark. The language of the Lanham Act describes that universe in the broadest of terms. It says that trademarks "includ[e] any word, name, symbol, or device, or any combination thereof." § 1127. Since human beings might use as a "symbol" or "device" almost anything at all that is capable of carrying meaning, this language, read literally, is not restrictive. The courts and the Patent and Trademark Office have authorized for use as a mark a particular shape (of a Coca-Cola bottle), a particular sound (of NBC's three chimes), and even a particular scent (of plumeria blossoms on sewing thread). See, e. g., Registration No. 696,147 (Apr. 12, 1960); Registration Nos. 523,616 (Apr. 4, 1950) and 916,522 (July 13, 1971); In re Clarke, 17 U. S. P. Q. 2d 1238, 1240 (TTAB 1990). If a shape, a sound, and a fragrance can act as symbols why, one might ask, can a color not do the same?
A color is also capable of satisfying the more important part of the statutory definition of a trademark, which requires that a person "us[e]" or "inten[d] to use" the mark
"to identify and distinguish his or her goods, including a unique product, from those manufactured or sold by others and to indicate the source of the goods, even if that source is unknown." 15 U. S. C. § 1127.
True, a product's color is unlike "fanciful," "arbitrary," or "suggestive" words or designs, which almost automatically
tell a customer that they refer to a brand. Abercrombie & Fitch Co. v. Hunting World, Inc., 537 F.2d 4, 9-10 (CA2 1976) (Friendly, J.); see Two Pesos, Inc. v. Taco Cabana, Inc., 505 U. S. 763, 768 (1992). The imaginary word "Suntost," or the words "Suntost Marmalade," on a jar of orange jam immediately would signal a brand or a product "source"; the jam's orange color does not do so. But, over time, customers may come to treat a particular color on a product or its packaging (say, a color that in context seems unusual, such as pink on a firm's insulating material or red on the head of a large industrial bolt) as signifying a brand. And, if so, that color would have come to identify and distinguish the goods-i. e., "to indicate" their "source"-much in the way that descriptive words on a product (say, "Trim" on nail clippers or "Car- Freshner" on deodorizer) can come to indicate a product's origin. See, e. g., J. Wiss & Sons Co. v. W E. Bassett Co., 59 C. C. P. A. 1269, 1271 (Pat.), 462 F.2d 567, 569 (1972); Car-Freshner Corp. v. Turtle Wax, Inc., 268 F. Supp. 162, 164 (SDNY 1967). In this circumstance, trademark law says that the word (e. g., "Trim"), although not inherently distinctive, has developed "secondary meaning." See Inwood Laboratories, Inc. v. Ives Laboratories, Inc., 456 U. S. 844, 851, n. 11 (1982) ("[S]econdary meaning" is acquired when "in the minds of the public, the primary significance of a product feature ... is to identify the source of the product rather than the product itself"). Again, one might ask, if trademark law permits a descriptive word with secondary meaning to act as a mark, why would it not permit a color, under similar circumstances, to do the same?
We cannot find in the basic objectives of trademark law any obvious theoretical objection to the use of color alone as a trademark, where that color has attained "secondary meaning" and therefore identifies and distinguishes a particular brand (and thus indicates its "source"). In principle, trademark law, by preventing others from copying a sourceidentifying mark, "reduce[s] the customer's costs of shopping and making purchasing decisions," 1 J. McCarthy, McCarthy on Trademarks and Unfair Competition § 2.01[2], p. 2-3 (3d ed. 1994) (hereinafter McCarthy), for it quickly and easily assures a potential customer that this item-the item with this mark-is made by the same producer as other similarly marked items that he or she liked (or disliked) in the past. At the same time, the law helps assure a producer that it (and not an imitating competitor) will reap the financial, reputation-related rewards associated with a desirable product. The law thereby "encourage[s] the production of quality products," ibid., and simultaneously discourages those who hope to sell inferior products by capitalizing on a consumer's inability quickly to evaluate the quality of an item offered for sale. See, e. g., 3 L. Altman, Callmann on Unfair Competition, Trademarks and Monopolies § 17.03 (4th ed. 1983); Landes & Posner, The Economics of Trademark Law, 78 T. M. Rep. 267, 271-272 (1988); Park 'N Fly, Inc. v. Dollar Park & Fly, Inc., 469 U. S. 189, 198 (1985); S. Rep. No. 100515, p. 4 (1988). It is the source-distinguishing ability of a mark-not its ontological status as color, shape, fragrance, word, or sign-that permits it to serve these basic purposes. See Landes & Posner, Trademark Law: An Economic Perspective, 30 J. Law & Econ. 265, 290 (1987). And, for that reason, it is difficult to find, in basic trademark objectives, a reason to disqualify absolutely the use of a color as a mark. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
Can you summarize what happened in the Qualitex Co. v. Jacobson Products Co. decision? What act supported the right to include color to qualify as a trademark? Explain in a minimum of 200 words how the act's language supported Qualitex Co.'s opinion.
The case before us grows out of petitioner Qualitex Company's use (since the 1950's) of a special shade of green-gold color on the pads that it makes and sells to dry cleaning firms for use on dry cleaning presses. In 1989, respondent Jacobson Products (a Qualitex rival) began to sell its own press pads to dry cleaning firms; and it colored those pads a similar green-gold. In 1991, Qualitex registered the special green-gold color on press pads with the Patent and Trademark Office as a trademark. Registration No. 1,633,711 (Feb. 5, 1991). Qualitex subsequently added a trademark infringement count, 15 U. S. C. § 1114(1), to an unfair competition claim, § 1125(a), in a lawsuit it had already filed challenging Jacobson's use of the green-gold color.
Qualitex won the lawsuit in the District Court. 21 U. S. P. Q. 2d 1457 (CD Cal. 1991). But, the Court of Appeals for the Ninth Circuit set aside the judgment in Qualitex's favor on the trademark infringement claim because, in that Circuit's view, the Lanham Act does not permit Qualitex, or anyone else, to register "color alone" as a trademark. 13 F.3d 1297, 1300, 1302 (1994).
The Courts of Appeals have differed as to whether or not the law recognizes the use of color alone as a trademark. Compare NutraSweet Co. v. Stadt Corp., 917 F.2d 1024, 1028 (CA7 1990) (absolute prohibition against protection of color alone), with In re Owens-Corning Fiberglas Corp., 774 F.2d 1116, 1128 (CA Fed. 1985) (allowing registration of color pink for fiberglass insulation), and Master Distributors, Inc. v. Pako Corp., 986 F.2d 219, 224 (CA8 1993) (declining to establish per se prohibition against protecting color alone as a trademark). Therefore, this Court granted certiorari. 512 U. S. 1287 (1994). We now hold that there is no rule absolutely barring the use of color alone, and we reverse the judgment of the Ninth Circuit.
II
The Lanham Act gives a seller or producer the exclusive right to "register" a trademark, 15 U. S. C. § 1052 (1988 ed. and Supp. V), and to prevent his or her competitors from using that trademark, § 1114(1). Both the language of the Act and the basic underlying principles of trademark law would seem to include color within the universe of things that can qualify as a trademark. The language of the Lanham Act describes that universe in the broadest of terms. It says that trademarks "includ[e] any word, name, symbol, or device, or any combination thereof." § 1127. Since human beings might use as a "symbol" or "device" almost anything at all that is capable of carrying meaning, this language, read literally, is not restrictive. The courts and the Patent and Trademark Office have authorized for use as a mark a particular shape (of a Coca-Cola bottle), a particular sound (of NBC's three chimes), and even a particular scent (of plumeria blossoms on sewing thread). See, e. g., Registration No. 696,147 (Apr. 12, 1960); Registration Nos. 523,616 (Apr. 4, 1950) and 916,522 (July 13, 1971); In re Clarke, 17 U. S. P. Q. 2d 1238, 1240 (TTAB 1990). If a shape, a sound, and a fragrance can act as symbols why, one might ask, can a color not do the same?
A color is also capable of satisfying the more important part of the statutory definition of a trademark, which requires that a person "us[e]" or "inten[d] to use" the mark
"to identify and distinguish his or her goods, including a unique product, from those manufactured or sold by others and to indicate the source of the goods, even if that source is unknown." 15 U. S. C. § 1127.
True, a product's color is unlike "fanciful," "arbitrary," or "suggestive" words or designs, which almost automatically
tell a customer that they refer to a brand. Abercrombie & Fitch Co. v. Hunting World, Inc., 537 F.2d 4, 9-10 (CA2 1976) (Friendly, J.); see Two Pesos, Inc. v. Taco Cabana, Inc., 505 U. S. 763, 768 (1992). The imaginary word "Suntost," or the words "Suntost Marmalade," on a jar of orange jam immediately would signal a brand or a product "source"; the jam's orange color does not do so. But, over time, customers may come to treat a particular color on a product or its packaging (say, a color that in context seems unusual, such as pink on a firm's insulating material or red on the head of a large industrial bolt) as signifying a brand. And, if so, that color would have come to identify and distinguish the goods-i. e., "to indicate" their "source"-much in the way that descriptive words on a product (say, "Trim" on nail clippers or "Car- Freshner" on deodorizer) can come to indicate a product's origin. See, e. g., J. Wiss & Sons Co. v. W E. Bassett Co., 59 C. C. P. A. 1269, 1271 (Pat.), 462 F.2d 567, 569 (1972); Car-Freshner Corp. v. Turtle Wax, Inc., 268 F. Supp. 162, 164 (SDNY 1967). In this circumstance, trademark law says that the word (e. g., "Trim"), although not inherently distinctive, has developed "secondary meaning." See Inwood Laboratories, Inc. v. Ives Laboratories, Inc., 456 U. S. 844, 851, n. 11 (1982) ("[S]econdary meaning" is acquired when "in the minds of the public, the primary significance of a product feature ... is to identify the source of the product rather than the product itself"). Again, one might ask, if trademark law permits a descriptive word with secondary meaning to act as a mark, why would it not permit a color, under similar circumstances, to do the same?
We cannot find in the basic objectives of trademark law any obvious theoretical objection to the use of color alone as a trademark, where that color has attained "secondary meaning" and therefore identifies and distinguishes a particular brand (and thus indicates its "source"). In principle, trademark law, by preventing others from copying a sourceidentifying mark, "reduce[s] the customer's costs of shopping and making purchasing decisions," 1 J. McCarthy, McCarthy on Trademarks and Unfair Competition § 2.01[2], p. 2-3 (3d ed. 1994) (hereinafter McCarthy), for it quickly and easily assures a potential customer that this item-the item with this mark-is made by the same producer as other similarly marked items that he or she liked (or disliked) in the past. At the same time, the law helps assure a producer that it (and not an imitating competitor) will reap the financial, reputation-related rewards associated with a desirable product. The law thereby "encourage[s] the production of quality products," ibid., and simultaneously discourages those who hope to sell inferior products by capitalizing on a consumer's inability quickly to evaluate the quality of an item offered for sale. See, e. g., 3 L. Altman, Callmann on Unfair Competition, Trademarks and Monopolies § 17.03 (4th ed. 1983); Landes & Posner, The Economics of Trademark Law, 78 T. M. Rep. 267, 271-272 (1988); Park 'N Fly, Inc. v. Dollar Park & Fly, Inc., 469 U. S. 189, 198 (1985); S. Rep. No. 100515, p. 4 (1988). It is the source-distinguishing ability of a mark-not its ontological status as color, shape, fragrance, word, or sign-that permits it to serve these basic purposes. See Landes & Posner, Trademark Law: An Economic Perspective, 30 J. Law & Econ. 265, 290 (1987). And, for that reason, it is difficult to find, in basic trademark objectives, a reason to disqualify absolutely the use of a color as a mark.
https://supreme.justia.com/cases/federal/us/514/159/ |
In your answer, refer only to the context document. Do not employ any outside knowledge | According to the article, what percentage of your savings is best to put into venture capital? | **How I'd Invest $250,000 Cash Today**
Usually, I have between $50,000 – $100,000 in my main bank account. But at one point, I accumulated over $250,000 mainly due to a $122,000 private real estate investment windfall.
In addition to accumulating cash, I also dollar-cost averaged in the S&P 500 on the way down in 2022 and way up in 2023. I also dollar-cost averaged in Sunbelt real estate, which struggled in 2023 due to high mortgage rates. These purchases were usually in $1,000 – $5,000 increments.
After building a larger-than-normal cash balance, here's how I'd deploy it in today's market. I'm constantly updating this post as conditions change, so book mark it if interested. If you have less than $250,000, that’s fine too. I share the percentages of where I will allocate my money.
Background Info To Understand Our Investment Process
I'm 46 and my wife is 42. Our kids are 6 and 4.
We consider ourselves moderately conservative investors since we haven't had regular day job income since 2012 for me and 2015 for my wife.
We fear having to go back to work full-time, not because of work itself but because we fear losing our freedom to spend time with our young children. As a result, we are unwilling to take too much investment risk until both attend school full-time in fall 2024.
Although we don't have day jobs, we do generate passive investment income to cover most of our living expenses. This is our definition of financial independence.
We also generate online income, which we usually reinvest to generate more passive income. Therefore, our cash pile will continue to build if we don't spend or invest all the money.
Our children's educational expenses are on track after we superfunded two 529 plans when they were born. We also have life insurance and estate planning set up. The only foreseeable big ticket item coming up is a car in 2029.
Here's how we'd invest $250,000 cash in today's market. This is what we did and are doing with our own cash. This is not investment advice for you as everybody's financial goals, risk tolerance, and situation are different.
Please always do your own due diligence before making any investment. Your investment decisions are yours alone.
1) Treasury Bonds (50% Of Cash Holding)
Only about 3% of our net worth is in bonds, mostly individual muni bonds we plan to hold until maturity. Our target annual net worth growth rate is between 5% to 10% a year, depending on economic conditions. As a result, being able to earn 5% on a Treasury bond is enticing.
The 10-year yield is currently at ~4.2% and Fed Chair Jerome Powell has hinted at Fed rate cuts starting in mid-2024. Investors can get up to around 5% for a one-year Treasury bond.
Although locking in a 4% – 5% return won't make us rich, it will provide us peace of mind. We also already feel rich, so making more money won't make us feel richer. Our focus is on optimizing our freedom and time.
Below is a recent bond yield table for all the various types of bonds you can buy, by duration. Risk-free Treasury bills and CDs look attractive. If you're in the highest marginal income tax bracket, municipal bonds look good too. Notice how the Treasury bond yield curve is still inverted.
Now that we've deployed 50% of our cash in Treasury bonds, the remaining 49.9% of our cash will be invested in risk assets.
2) Stocks (15% Of Cash Holdings)
Roughly 15% of our net worth is in stocks after paying cash for a new house in 4Q2023. The range has fluctuated between 20% – 35% since I left work in 2012. Since I started working in equities in 1999, I've done my best to diversify away from stocks and into hard assets.
My career and pay were already leveraged to the stock market. And I saw so many great fortunes made and lost during my time in the industry. When I left work, I continued my preference of investing mostly in real estate.
We almost always front-loaded our stock purchases for the year through our kids' Roth IRAs, custodial accounts, SEP IRAs, and 529 plans. For over 23 years, we've always front-loaded our tax-advantaged accounts at the beginning of the year to get them out of the way.
Most of the time it works out, some of the time it doesn't, like in 2022. That's market timing for you. But we got to front-load our tax-advantaged investments again in 2023, which has worked out great. Keep on investing consistently!
In addition to maxing out all our tax-advantaged accounts, we've been regular contributors to our taxable online brokerage accounts. After all, in order to retire early, you need a much larger taxable investment portfolio to live off its income.
When it comes to stocks, it's important to invest for specific purposes. If you do, you will be much more motivated to save and invest since stocks provide no utility or joy.
Stocks Seem Fully Valued Now
Here are the 2024 Wall Street S&P 500 forecasts with an average year-end price target of about 4,850. In other words, there’s now downside at these levels for 2024 if the average prediction comes true. Although, some strategists are forecasting 5,100-5,500 for the year.
Given the situation, I'm just buying in $1,000 – $5,000 tranches after every 1% decline. The huge year-end rally in stocks has pulled forward the expected performance in 2024.
Here is a post that provides a framework for your stock allocation by bond yield. The higher risk-free bond yields go, the lower your stock allocation is recommended to be and vice versa.
If I was in my 20s and 30s, I would allocate 50% of my cash to buying stocks instead. The remaining 20% would go to online real estate as the sector rebounds, 20% to venture capital, and only 10% would go to Treasuries and education. Remember, every investment is based off an individual's personal financial situation and goals.
3) Venture Capital (15% Of Cash Holding)
I enjoy investing in private funds because they are long-term investments with no day-to-day price updates. As a result, these investments cause little stress and are easy to forget about. Private investing forces you to invest for the long run.
I've already made capital commitments to a couple venture capital funds from Kleiner Perkins, Burst Capital, and Structural Capital (venture debt). As a result, I will just keep contributing to these funds whenever there are capital calls.
Venture capital is likely going to roar back in 2024 given private company valuations took a hit since 2022. Capital often rotates toward the biggest underperforming asset classes.
Investing In Artificial Intelligence
I'm most excited about investing in artificial intelligence, one of the biggest investment opportunities over the next decade. My Kleiner Perkins funds are actively making AI investments. But these funds are invite only with $100,000+ minimums.
The Fundrise Innovation Fund, on the other hand, is open to all with a $10 minimum investment. The fund invests in AI companies such as Databricks and Canva. Both are incredible companies and I look forward to the fund getting into more promising AI deals.
20 years from now, I don't want my kids asking me why I didn't invest in AI or work in AI given I had a chance near the beginning. By investing in funds that invest in AI, at least I'll be able to benefit if I can't get a job in AI.
Here's an hour-long discussion I have with Ben Miller, CEO of Fundrise, about AI and investing in growth companies. Roughly 35% of the Innovation Fund is invested in AI companies.
4) Real Estate (20% Of Cash Holding)
I’m bullish on real estate in 2024 as the sector plays catch-up to stocks. With mortgage rates coming down, demand is going to rebound. As a result, I’m actively buying real estate funds today.
Real estate is my favorite asset class to build wealth. It provides shelter, generates income, and is less volatile. Unlike with some stocks, real estate values just don't decline by massive amounts overnight due to some small earnings miss. Real estate accounts for about 50% of our net worth.
No matter what happens to the value of our current forever home we bought in 2020, I'm thankful it has been able to keep my family safe and loved during the pandemic. When it comes to buying a primary residence, it's lifestyle first, investment returns a distant second. | [query]
=======
According to the article, what percentage of your savings is best to put into venture capital?
----------
[instruction]
=======
In your answer, refer only to the context document. Do not employ any outside knowledge
----------
[article]
=======
**How I'd Invest $250,000 Cash Today**
Usually, I have between $50,000 – $100,000 in my main bank account. But at one point, I accumulated over $250,000 mainly due to a $122,000 private real estate investment windfall.
In addition to accumulating cash, I also dollar-cost averaged in the S&P 500 on the way down in 2022 and way up in 2023. I also dollar-cost averaged in Sunbelt real estate, which struggled in 2023 due to high mortgage rates. These purchases were usually in $1,000 – $5,000 increments.
After building a larger-than-normal cash balance, here's how I'd deploy it in today's market. I'm constantly updating this post as conditions change, so book mark it if interested. If you have less than $250,000, that’s fine too. I share the percentages of where I will allocate my money.
Background Info To Understand Our Investment Process
I'm 46 and my wife is 42. Our kids are 6 and 4.
We consider ourselves moderately conservative investors since we haven't had regular day job income since 2012 for me and 2015 for my wife.
We fear having to go back to work full-time, not because of work itself but because we fear losing our freedom to spend time with our young children. As a result, we are unwilling to take too much investment risk until both attend school full-time in fall 2024.
Although we don't have day jobs, we do generate passive investment income to cover most of our living expenses. This is our definition of financial independence.
We also generate online income, which we usually reinvest to generate more passive income. Therefore, our cash pile will continue to build if we don't spend or invest all the money.
Our children's educational expenses are on track after we superfunded two 529 plans when they were born. We also have life insurance and estate planning set up. The only foreseeable big ticket item coming up is a car in 2029.
Here's how we'd invest $250,000 cash in today's market. This is what we did and are doing with our own cash. This is not investment advice for you as everybody's financial goals, risk tolerance, and situation are different.
Please always do your own due diligence before making any investment. Your investment decisions are yours alone.
1) Treasury Bonds (50% Of Cash Holding)
Only about 3% of our net worth is in bonds, mostly individual muni bonds we plan to hold until maturity. Our target annual net worth growth rate is between 5% to 10% a year, depending on economic conditions. As a result, being able to earn 5% on a Treasury bond is enticing.
The 10-year yield is currently at ~4.2% and Fed Chair Jerome Powell has hinted at Fed rate cuts starting in mid-2024. Investors can get up to around 5% for a one-year Treasury bond.
Although locking in a 4% – 5% return won't make us rich, it will provide us peace of mind. We also already feel rich, so making more money won't make us feel richer. Our focus is on optimizing our freedom and time.
Below is a recent bond yield table for all the various types of bonds you can buy, by duration. Risk-free Treasury bills and CDs look attractive. If you're in the highest marginal income tax bracket, municipal bonds look good too. Notice how the Treasury bond yield curve is still inverted.
Now that we've deployed 50% of our cash in Treasury bonds, the remaining 49.9% of our cash will be invested in risk assets.
2) Stocks (15% Of Cash Holdings)
Roughly 15% of our net worth is in stocks after paying cash for a new house in 4Q2023. The range has fluctuated between 20% – 35% since I left work in 2012. Since I started working in equities in 1999, I've done my best to diversify away from stocks and into hard assets.
My career and pay were already leveraged to the stock market. And I saw so many great fortunes made and lost during my time in the industry. When I left work, I continued my preference of investing mostly in real estate.
We almost always front-loaded our stock purchases for the year through our kids' Roth IRAs, custodial accounts, SEP IRAs, and 529 plans. For over 23 years, we've always front-loaded our tax-advantaged accounts at the beginning of the year to get them out of the way.
Most of the time it works out, some of the time it doesn't, like in 2022. That's market timing for you. But we got to front-load our tax-advantaged investments again in 2023, which has worked out great. Keep on investing consistently!
In addition to maxing out all our tax-advantaged accounts, we've been regular contributors to our taxable online brokerage accounts. After all, in order to retire early, you need a much larger taxable investment portfolio to live off its income.
When it comes to stocks, it's important to invest for specific purposes. If you do, you will be much more motivated to save and invest since stocks provide no utility or joy.
Stocks Seem Fully Valued Now
Here are the 2024 Wall Street S&P 500 forecasts with an average year-end price target of about 4,850. In other words, there’s now downside at these levels for 2024 if the average prediction comes true. Although, some strategists are forecasting 5,100-5,500 for the year.
Given the situation, I'm just buying in $1,000 – $5,000 tranches after every 1% decline. The huge year-end rally in stocks has pulled forward the expected performance in 2024.
Here is a post that provides a framework for your stock allocation by bond yield. The higher risk-free bond yields go, the lower your stock allocation is recommended to be and vice versa.
If I was in my 20s and 30s, I would allocate 50% of my cash to buying stocks instead. The remaining 20% would go to online real estate as the sector rebounds, 20% to venture capital, and only 10% would go to Treasuries and education. Remember, every investment is based off an individual's personal financial situation and goals.
3) Venture Capital (15% Of Cash Holding)
I enjoy investing in private funds because they are long-term investments with no day-to-day price updates. As a result, these investments cause little stress and are easy to forget about. Private investing forces you to invest for the long run.
I've already made capital commitments to a couple venture capital funds from Kleiner Perkins, Burst Capital, and Structural Capital (venture debt). As a result, I will just keep contributing to these funds whenever there are capital calls.
Venture capital is likely going to roar back in 2024 given private company valuations took a hit since 2022. Capital often rotates toward the biggest underperforming asset classes.
Investing In Artificial Intelligence
I'm most excited about investing in artificial intelligence, one of the biggest investment opportunities over the next decade. My Kleiner Perkins funds are actively making AI investments. But these funds are invite only with $100,000+ minimums.
The Fundrise Innovation Fund, on the other hand, is open to all with a $10 minimum investment. The fund invests in AI companies such as Databricks and Canva. Both are incredible companies and I look forward to the fund getting into more promising AI deals.
20 years from now, I don't want my kids asking me why I didn't invest in AI or work in AI given I had a chance near the beginning. By investing in funds that invest in AI, at least I'll be able to benefit if I can't get a job in AI.
Here's an hour-long discussion I have with Ben Miller, CEO of Fundrise, about AI and investing in growth companies. Roughly 35% of the Innovation Fund is invested in AI companies.
4) Real Estate (20% Of Cash Holding)
I’m bullish on real estate in 2024 as the sector plays catch-up to stocks. With mortgage rates coming down, demand is going to rebound. As a result, I’m actively buying real estate funds today.
Real estate is my favorite asset class to build wealth. It provides shelter, generates income, and is less volatile. Unlike with some stocks, real estate values just don't decline by massive amounts overnight due to some small earnings miss. Real estate accounts for about 50% of our net worth.
No matter what happens to the value of our current forever home we bought in 2020, I'm thankful it has been able to keep my family safe and loved during the pandemic. When it comes to buying a primary residence, it's lifestyle first, investment returns a distant second. |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | My house recently burned down, and I have decided to stay in a hotel until construction on my new house is completed. What rights do I have as a hotel customer? Please organize the information into a neat bullet-point list and limit your response to 150 words. | Your Rights at a Hotel or Motel in Tennessee
If you stay at or live in a hotel or motel, you should know its rights and responsibilities as well as your own rights and responsibilities.
Minimum Standards for Hotels and Motels in Tennessee
The Tennessee Department of Health, and Department of Environment and Conservation, require minimum standards for all hotels and motels in Tennessee. Those standards cover utilities, maintenance, safety, and basic health, and are available in full online here.
Make Sure that You Know Your Rights
As a guest at a hotel or motel, you should ask for a copy of all contracts or agreements that you signed for your stay. These papers may set forth the hotel's or motel's requirements during your stay as well as your rights and responsibilities as a guest in the hotel or motel.
Hotels and Motels Cannot Discriminate against You
Under federal and state laws, hotels and motels are not allowed to discriminate against you (including refusing or denying services, or kicking you out) because of your race, creed, color, national origin, religion, sex, disability, marital status or age unless it is for a good reason.
After staying at a hotel or motel for 30 consistent days, you now have tenant rights. The Legal Aid Society in Tennessee successfully takes the position that once you've been there for 30 days, you're entitled to the same protection against eviction as a conventional tenant. That would mean the right to notice to cure or quit for a violation or unpaid rent, and to have your eviction case heard in court. You'd also have the same protections when it comes to repair issues, and retaliation.
Refusal to Provide Services to You
Hotels and motels have the right to refuse or deny services to you for the following reasons:
-You refuse to pay or are unable to pay for the services offered by the hotel or motel
-You are visibly intoxicated or disturbing the public
-The hotel or motel reasonably believes that you are seeking services for an illegal reason
-The hotel or motel reasonably believes that you are bringing dangerous items or substances onto the property
-The hotel or motel is trying to limit the number of people staying at it
-These rules must be visibly and clearly posted at the registration desk and in every room.
Removing You from Your Room and the Property
Hotels and motels have the right to remove you from your room and the property, even if you are living there, for the following reasons:
-You refuse to pay and do not pay for the accommodations or services provided to you
-You are visibly intoxicated or disturbing the public
-The hotel or motel reasonably believes that you are involved in illegal activities during your stay
-The hotel or motel reasonably believes that you are bringing dangerous items (such as guns or explosives) onto the property
-You have violated a federal, state or local law involving the hotel or motel
-You have violated a rule of the hotel or motel
Under state law, you have a right to be notified of these six rules. In addition, you have a right to be notified of the rules of the hotel or motel. All of the rules must be visibly and clearly posted at the registration desk and in every room.
Can a Hotel or Motel Sue You?
Hotels and motels may sue you for any accommodations or services that you received from it without paying for them. In addition, hotels and motels may sue you for any damages that you cause. Those damages may include property damages, lost revenue if it is unable to use the room while it is being repaired, and restitution to any person who is injured due to the damages caused by you.
What if Your Property or Luggage is Lost, Stolen or Destroyed?
Hotels and motels may allow you to keep your valuable items (such as money or jewelry) in a safe during your stay. The hotel or motel may be liable for losses, up to $800, if your valuable items are lost, stolen or destroyed. Similarly, the hotel or motel may be liable for your lost or stolen luggage, up to $1,500, if it does not provide a place to keep your luggage during your stay.
Can a Hotel or Motel Take, Sell or Destroy Your Property?
Hotels and motels may place a lien on any property that your brought to your room (such as goods, clothes, baggage or furniture) if you owe any money to the hotel or motel, allowing the hotel to keep your property until you pay. Such liens may be placed on your belongings if you have stayed for one or more nights. In addition, if you owe money to the hotel or motel, it can auction off any and all of the luggage or property that you left there to pay off your debt. The hotel or motel must advertise such auctions at least ten days beforehand. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
My house recently burned down, and I have decided to stay in a hotel until construction on my new house is completed. What rights do I have as a hotel customer? Please organize the information into a neat bullet-point list and limit your response to 150 words.
Your Rights at a Hotel or Motel in Tennessee
If you stay at or live in a hotel or motel, you should know its rights and responsibilities as well as your own rights and responsibilities.
Minimum Standards for Hotels and Motels in Tennessee
The Tennessee Department of Health and the Department of Environment and Conservation require minimum standards for all hotels and motels in Tennessee. Those standards cover utilities, maintenance, safety, and basic health, and are available in full online here.
Make Sure that You Know Your Rights
As a guest at a hotel or motel, you should ask for a copy of all contracts or agreements that you signed for your stay. These papers may set forth the hotel's or motel's requirements during your stay as well as your rights and responsibilities as a guest in the hotel or motel.
Hotels and Motels Cannot Discriminate against You
Under federal and state laws, hotels and motels are not allowed to discriminate against you (including refusing or denying services, or kicking you out) because of your race, creed, color, national origin, religion, sex, disability, marital status or age unless it is for a good reason.
After staying at a hotel or motel for 30 consecutive days, you have tenant rights. The Legal Aid Society in Tennessee successfully takes the position that once you've been there for 30 days, you're entitled to the same protection against eviction as a conventional tenant. That would mean the right to notice to cure or quit for a violation or unpaid rent, and to have your eviction case heard in court. You'd also have the same protections when it comes to repair issues and retaliation.
Refusal to Provide Services to You
Hotels and motels have the right to refuse or deny services to you for the following reasons:
-You refuse to pay or are unable to pay for the services offered by the hotel or motel
-You are visibly intoxicated or disturbing the public
-The hotel or motel reasonably believes that you are seeking services for an illegal reason
-The hotel or motel reasonably believes that you are bringing dangerous items or substances onto the property
-The hotel or motel is trying to limit the number of people staying at it
These rules must be visibly and clearly posted at the registration desk and in every room.
Removing You from Your Room and the Property
Hotels and motels have the right to remove you from your room and the property, even if you are living there, for the following reasons:
-You refuse to pay and do not pay for the accommodations or services provided to you
-You are visibly intoxicated or disturbing the public
-The hotel or motel reasonably believes that you are involved in illegal activities during your stay
-The hotel or motel reasonably believes that you are bringing dangerous items (such as guns or explosives) onto the property
-You have violated a federal, state or local law involving the hotel or motel
-You have violated a rule of the hotel or motel
Under state law, you have a right to be notified of these six rules. In addition, you have a right to be notified of the rules of the hotel or motel. All of the rules must be visibly and clearly posted at the registration desk and in every room.
Can a Hotel or Motel Sue You?
Hotels and motels may sue you for any accommodations or services that you received without paying for them. In addition, hotels and motels may sue you for any damages that you cause. Those damages may include property damage, lost revenue if the hotel or motel is unable to use the room while it is being repaired, and restitution to any person who is injured due to the damage you caused.
What if Your Property or Luggage is Lost, Stolen or Destroyed?
Hotels and motels may allow you to keep your valuable items (such as money or jewelry) in a safe during your stay. The hotel or motel may be liable for losses, up to $800, if your valuable items are lost, stolen or destroyed. Similarly, the hotel or motel may be liable for your lost or stolen luggage, up to $1,500, if it does not provide a place to keep your luggage during your stay.
Can a Hotel or Motel Take, Sell or Destroy Your Property?
Hotels and motels may place a lien on any property that you brought to your room (such as goods, clothes, baggage or furniture) if you owe any money to the hotel or motel, allowing the hotel to keep your property until you pay. Such liens may be placed on your belongings if you have stayed for one or more nights. In addition, if you owe money to the hotel or motel, it can auction off any and all of the luggage or property that you left there to pay off your debt. The hotel or motel must advertise such auctions at least ten days beforehand.
https://www.help4tn.org/node/1493/your-rights-hotel-or-motel-tennessee-help4tn-blog
You must generate a response using only this provided document. Do not use any other outside source to support your claims. If you are unable to answer the request using the supporting document only, then you must respond with "please support more relevant documents so that I may answer your request accurately".
What do the ratings say that are 2 stars and below?
Top positive review
Positive reviews›
Jodi P
5.0 out of 5 stars
Is as described
Reviewed in the United States on December 14, 2023
Like the balls, good for exercising fingers. A bit small for full hand workout
3 people found this helpful
Top critical review
Critical reviews›
Bonnie Rosenstock
3.0 out of 5 stars
Not very substantial
Reviewed in the United States on November 23, 2023
Too small. So not very good workout.
2 people found this helpful
3,286 total ratings, 194 with reviews
From the United States
Jodi P
5.0 out of 5 stars
Is as described
Reviewed in the United States on December 14, 2023
Verified Purchase
Like the balls, good for exercising fingers. A bit small for full hand workout
3 people found this helpful
Helpful
Report
Jesse B
5.0 out of 5 stars
Great exercise for your hands
Reviewed in the United States on January 29, 2024
Verified Purchase
Have a little arthritis in both hands, and I use the balls to exercise my grip. Works great.
Helpful
Report
Ronda Sasser
4.0 out of 5 stars
Good for PT
Reviewed in the United States on September 10, 2023
Verified Purchase
Good for strength training your hands after shoulder surgery.
Helpful
Report
Marie Skinner
5.0 out of 5 stars
Just what i was looking for.
Reviewed in the United States on January 6, 2024
Verified Purchase
As a massage therapist, i use my hands a lot. I got these balls to strengthen them. The balls are
easy to use.
Helpful
Report
Bonnie Rosenstock
3.0 out of 5 stars
Not very substantial
Reviewed in the United States on November 23, 2023
Verified Purchase
Too small. So not very good workout.
2 people found this helpful
Helpful
Report
Paul Gabriel Wiener
5.0 out of 5 stars
They do what they're supposed to do
Reviewed in the United States on September 17, 2022
Verified Purchase
Set of 3 squeeze balls. Yellow is pretty soft, orange is moderately firm, and blue is kind of tough.
They've got a good texture. Just rough enough to have some grip without being irritating to hold.
They helped strengthen my arms in preparation for some IV treatment, and they're also just fun to
squeeze. They'd make good juggling practice balls, too, if you're into that.
7 people found this helpful
Helpful
Report
E. Nawrocki
5.0 out of 5 stars
A little sticky at first
Reviewed in the United States on August 30, 2023
Verified Purchase
These were a little sticky at first but got better during use. Helped with my hands that had some
ligament damage.
One person found this helpful
Helpful
Report
DianaQ
5.0 out of 5 stars
Great Squishy Balls
Reviewed in the United States on August 5, 2022
Verified Purchase
Broke my arm in three places and wound up with a big, purple, swollen hand. Surgeon suggested
this type of hand exercise to get my hand back to normal. I have poor circulation in the other hand
(goes to sleep easily) so now I do two-handed squishy ball squeezes as I watch TV in the evening.
It’s clearly benefiting both hands! Good value for the money spent. Zippered case keeps them clean.
Don’t know why anyone would need to spend more on exercise balls like these.
3 people found this helpful
Helpful
Report
Richard Lyda
4.0 out of 5 stars
Squeeze balls
Reviewed in the United States on July 25, 2023
Verified Purchase
They are squeeze balls for medical purposes
They squeeze what can I say
Helpful
Report
Prairie Gal
3.0 out of 5 stars
Just ok
Reviewed in the United States on November 2, 2023
Verified Purchase
There was no indication of the colors and resistance levels and it is very hard to feel the difference!
Ok for the money paid!
One person found this helpful
Wesismore
2.0 out of 5 stars
Not what I wanted
Reviewed in the United States on January 31, 2024
Verified Purchase
These feel cheap. They say that there are 3 levels of resistence which is nonsense. Both I and my
mother who I bought these for, couldn't tell/feel the differences among them. Also, they say they are
2 inches across, they are not. They measure smaller and feel as such in ones hand. I am returning
for a refund.
Helpful
Report
Norine McDonald Tepas
4.0 out of 5 stars
PT
Reviewed in the United States on July 16, 2023
Verified Purchase
Suggested by my Doctor and PT
Helpful
Report
J. Smith
4.0 out of 5 stars
Different strengths are great
Reviewed in the United States on April 30, 2023
Verified Purchase
I like the idea I can have the option of the different strengths. I wish they were a little bit bigger. I
have osteoarthritis in my fingers and the stress balls really help.
2 people found this helpful
Helpful
Report
Marie
4.0 out of 5 stars
Stress Balls
Reviewed in the United States on June 28, 2023
Verified Purchase
They are Ok
Helpful
Report
Francisco
4.0 out of 5 stars
Quite good
Reviewed in the United States on May 13, 2023
Verified Purchase
Pretty happy with them. Wish they were bigger, but otherwise got what I wanted
2 people found this helpful
Helpful
Report
Angela C. Adams
5.0 out of 5 stars
soft
Reviewed in the United States on October 4, 2023
Verified Purchase
easy to use
One person found this helpful
Helpful
Report
Angela K.
4.0 out of 5 stars
Smaller than expected
Reviewed in the United States on February 21, 2023
Verified Purchase
Like the material. It’s easy to grip and not slippery. Many options for hand and finger strengthening
2 people found this helpful
Helpful
Report
Charles L.
4.0 out of 5 stars
A bit small for a woman's hand
Reviewed in the United States on February 20, 2023
Verified Purchase
A bit small to do physical therapy for an average woman's hand, but otherwise very good.
3 people found this helpful
Helpful
Report
Debora Vardeman
5.0 out of 5 stars
Our Grand dogs love them
Reviewed in the United States on March 23, 2023
Verified Purchase
We buy these for our grand dogs as they are small enough for them to grab by the mouth and bring
back to us. Due to what they are made of, the dogs can not tear them apart. We also have a niece
dog that visits and she goes nuts over them. Very well made.
Helpful
Report
Maureen
5.0 out of 5 stars
3 firmness levels…works great!
Reviewed in the United States on August 20, 2023
Verified Purchase
I used this for exercising my hand. Loved that the colors correspond to the firmness levels.
3 people found this helpful
Sharon DeLorenzo
3.0 out of 5 stars
Very small
Reviewed in the United States on June 6, 2023
Verified Purchase
Purchase this as part of OT after shoulder replacement to strengthen my hand grip. I am the petite
woman and these are very small did not like at all. Returned
3 people found this helpful
Helpful
Report
dale decarlo
2.0 out of 5 stars
Too small
Reviewed in the United States on January 10, 2024
Verified Purchase
The person in the picture must have tiny little hands. These were very small.
Helpful
Report
Robert
3.0 out of 5 stars
excersise ball
Reviewed in the United States on July 5, 2023
Verified Purchase
Image is mis leading. To small. Dont reccomend to buy.
2 people found this helpful
Helpful
Report
Debby
4.0 out of 5 stars
I bought it for me
Reviewed in the United States on December 23, 2022
Verified Purchase
Broke my wrist and need them for therapy
2 people found this helpful
Helpful
Report
Christy
5.0 out of 5 stars
100% helpful
Reviewed in the United States on May 12, 2023
Verified Purchase
Love these. I'm trying to build up wrist/finger strength and these are great way to start. I can use at
desk during work.
One person found this helpful
Helpful
Report
David C. Fischer
2.0 out of 5 stars
Too small
Reviewed in the United States on December 29, 2023
Verified Purchase
Too small to be of much use
Helpful
Report
Kathleen S. Jablonski
4.0 out of 5 stars
Smaller than expected, but a good feel in my hand.
Reviewed in the United States on August 14, 2022
Verified Purchase
Smaller than expected, but a good feel in my hand. I’m not sure I like the sort of sticky feeling to the
gel, but on the overall, I think it’s a great value.
One person found this helpful
Helpful
Report
Brittany Chavarria
5.0 out of 5 stars
I recommend it
Reviewed in the United States on May 15, 2023
Verified Purchase
The balls are a good size, they have different intensities, and they are made of very good material
One person found this helpful
Helpful
Report
Emily
5.0 out of 5 stars
Makes hands feel better.
Reviewed in the United States on June 18, 2023
Verified Purchase
Using them seems to help my arthritis
Helpful
Report
Sara Martin
5.0 out of 5 stars
Good Product
Reviewed in the United States on June 17, 2023
Verified Purchase
Will use this product in physical therapy
Beth
5.0 out of 5 stars
Nice
Reviewed in the United States on June 18, 2023
Verified Purchase
Has improved grip and strength
Helpful
Report
Lee W.
4.0 out of 5 stars
For my RA and carpal tunnel hand exercises
Reviewed in the United States on January 29, 2020
Verified Purchase
What I like: The size is just right for the average women's hands and it has three levels of
resistance-yellow/softer resistance, orange/medium resistance, blue/ harder resistance. Just enough
resistance so that you can press them but not collapse them. Each came in its own little zip lock bag.
What I kinda don't like: Feel weird...They are sticky like those toys my kids use to play with that you
throw at the wall and it sticks, then it slowly 'crawls' back down. So I use it inside of its plastic bag.
Crinkly but works.
22 people found this helpful
Helpful
Report
D. Lefever
5.0 out of 5 stars
Great for weak, elderly hands
Reviewed in the United States on January 9, 2023
Verified Purchase
My doctor said to buy these, and I use occasionally every night while watching TV. Fingers are
stronger and I'm dropping a lot less. Keep away from dogs.
3 people found this helpful
Helpful
Report
Nancy Alameda
5.0 out of 5 stars
Too small
Reviewed in the United States on April 29, 2021
Verified Purchase
I just really like them. I think they’ll be very helpful for my old painful hands.
After having used them for several days I’ve come to the conclusion that they are too small. I’m only
able to squeeze with my first three fingers. My thumb and pinky finger are uninvolved. I will send
them back and have already ordered a different set. I think these would be great for kids, but I don’t
know why kids would need them, unless for an injury.
6 people found this helpful
Helpful
Report
Thuong Le
4.0 out of 5 stars
Good
Reviewed in the United States on April 26, 2022
Verified Purchase
I practiced it every night and it worked. My hand feel better and wasn’t numb when I woke up.
Helpful
Report
JONATHAN V.
5.0 out of 5 stars
Good to have
Reviewed in the United States on May 2, 2023
Verified Purchase
Great to have
One person found this helpful
Helpful
Report
Samuel Moore II
4.0 out of 5 stars
Perfect
Reviewed in the United States on February 12, 2022
Verified Purchase
My father had a stroke in Dec 2021 He lost a little strength in his left hand, these were perfect for
him.
One person found this helpful
Helpful
Report
Tikiroom2435
3.0 out of 5 stars
No chart or label with firmness of each ball. Sticky to the touch. Okay for the price.
Reviewed in the United States on January 8, 2020
Verified Purchase
Ordered these balls for therapy after thumb ligament joint reconstruction surgery for osteoarthritis.
Great price but you get what you pay for. The balls are good size for my small hands but they are
sticky to the touch. The balls have imperfections which i can feel on my skin...weird. Was very
disappointed the balls arrived with no chart or instructions stating the firmness of each color. The
orange and yellow were so similar in firmness, I couldn’t tell which was which. My memory is not the
best but hate I have to keep looking up the chart photo on the Amazon listing to see which is which.
For the price, these are ok for me to start with but I think a cloth covered stress ball work better in my
situation.
8 people found this helpful
Helpful
Report
Litigator Rater
2.0 out of 5 stars
No instructions for use of the product
Reviewed in the United States on April 28, 2023
Verified Purchase
I received three spheres of varying color and density, in a clear cellophane envelope. There were no
instructions for use or maintenance. Inasmuch as these are advertised for exercise, it is unfair that
the promotional instructions are not provided to the buyers of the product. I suppose the only way to
see the ads on Amazon is through screen captures.
Helpful
Report
Isbel feliz
5.0 out of 5 stars
Excellent
Reviewed in the United States on April 20, 2023
Verified Purchase
They arrived intact
Robert F Anderson
1.0 out of 5 stars
sticky lint traps that I dont even want to touch!!!
Reviewed in the United States on February 14, 2024
Verified Purchase
sticky lint traps that I dont even want to touch let alone exercise!!! Total waste of money.
Helpful
Report
BILL SKEBECK
5.0 out of 5 stars
Very nice product!
Reviewed in the United States on October 23, 2022
Verified Purchase
Satisfied with product. First package came empty but Amazon customer service immediately
corrected this and sent the order very quickly and got the right package quickly....all good!
One person found this helpful
Helpful
Report
darknology
3.0 out of 5 stars
Gummy Balls
Reviewed in the United States on November 3, 2022
Verified Purchase
They have a gummy/sticky feel, which I find unpleasant. They each have a different consistency - as
advertised. I prefer the 2.5-inch ball that I have. Impressive colors, though.
2 people found this helpful
Helpful
Report
G. Boehm
5.0 out of 5 stars
Received my order a few days ago
Reviewed in the United States on March 14, 2023
Verified Purchase
It was what I wanted
Helpful
Report
all way seen
5.0 out of 5 stars
3 different level of softness. perfect for elders
Reviewed in the United States on February 10, 2023
Verified Purchase
my mother likes these smaller size relief balls.
2 people found this helpful
Helpful
Report
Sharon
3.0 out of 5 stars
VERY SMALL
Reviewed in the United States on July 22, 2021
Verified Purchase
These balls are very small (even for a woman's hands) and they are sticky/slimy at first touch. After
a bit of use (reluctantly) they do "dry up" somewhat. I needed to try them because I couldn't find
"stress balls" anywhere locally and I need them for finger stiffness resulting from a broken wrist. I will
likely return these when I find larger ones to buy from Amazon. Disappointed.
4 people found this helpful
Helpful
Report
Richard B.
3.0 out of 5 stars
Misleading Ad
Reviewed in the United States on February 22, 2022
Verified Purchase
Misleading, certainly shows what looks like a carry bag in the ad, but you don't get one. But the pic
of a carry bag (look alike) swayed the decision to buy it. Why show something that is not included,
unless you wanted to sway a person's choice.
One person found this helpful
Helpful
Report
SFR
4.0 out of 5 stars
Works best for small hands
Reviewed in the United States on December 10, 2021
Verified Purchase
My hands are not small but the balls work okay.
Helpful
Report
Karin M
4.0 out of 5 stars
A decent option
Reviewed in the United States on July 1, 2021
Verified Purchase
I'm not really able to tell a difference in the strength on these, and they are just a bit too small.
Helpful
Report
Kindle Customer
4.0 out of 5 stars
Worth the money
Reviewed in the United States on July 11, 2021
Verified Purchase
These work well for what I needed them for help with my hands that have tendinitis
Shmuelman
5.0 out of 5 stars
I thought they would be too small...
Reviewed in the United States on September 1, 2022
Verified Purchase
but when I started using them they are just right. Very comfortable and addictive to use.
Helpful
Report
Grace Laine
4.0 out of 5 stars
Addictive Therapy
Reviewed in the United States on August 24, 2020
Verified Purchase
I need these for numbness in my hands and fingers and use them habitually, either squeezing them
or rolling them in my palm for dexterity. There's a slight difference in thickness - mostly felt in the
blue ball. They're addictive and helpful.
One person found this helpful
Helpful
Report
WildWest
5.0 out of 5 stars
Do the job
Reviewed in the United States on November 27, 2021
Verified Purchase
Price point was great; definitely very different firmness. I used these after a bicep tendon
reattachment and had the three for only a bit more than the kids tennis ball my physical therapist
recommended.
Helpful
Report
ARMANDO BALTAZAR
4.0 out of 5 stars
Too small for a mans hand
Reviewed in the United States on September 9, 2021
Verified Purchase
The balls are too small for a mans hand
Helpful
Report
mnt
5.0 out of 5 stars
these are great
Reviewed in the United States on April 26, 2021
Verified Purchase
Only drawback is they don't come with the instructions for different exercises. Balls are nicely made
and a great substance. Just started with the yellow, which is lightest resistance but appreciate
having the others to upgrade to appropriately. They feel good to the touch.
2 people found this helpful
Helpful
Report
SILKOAK
5.0 out of 5 stars
good prodict
Reviewed in the United States on February 15, 2022
Verified Purchase
I ordered the balls to exercise my arthritic fingers and i do this numerous times a day. It will take
awhile but hope it helps.
Helpful
Report
Rainey
5.0 out of 5 stars
Hand therapeutic exercise balls
Reviewed in the United States on November 19, 2022
Verified Purchase
These are just as good as the Gaiam products.
One person found this helpful
Helpful
Report
LZee
5.0 out of 5 stars
Awesome
Reviewed in the United States on May 30, 2022
Verified Purchase
My Mom uses it for her arthritis. Her massage therapist had great comments about it. Mom is happy
Helpful
Report
Vince D
5.0 out of 5 stars
Does the job
Reviewed in the United States on October 12, 2021
Verified Purchase
I see reviews stating that there’s not much of a difference in resistance between the three. There’s a
significant difference to someone rehabbing a hand injury. Well worth trying for the price.
Helpful
Report
Mileyka
5.0 out of 5 stars
Very practical
Reviewed in the United States on February 20, 2022
Verified Purchase
A good investment because they are not very big. You can take them anywhere and keep your hands
and fingers exercised.
Sue
4.0 out of 5 stars
it works great for my needs
Reviewed in the United States on March 25, 2021
Verified Purchase
I like that it fits in my hands perfectly. Just firm enough to work my hands.
Helpful
Report
L. Key
5.0 out of 5 stars
Exercise for broken wrist
Reviewed in the United States on September 14, 2021
Verified Purchase
These are great to help a broken wrist heal! My wrist stopped hurting after I started using the ball! I
highly recommend these to anyone who has broken their wrist!!
Helpful
Report
Lorie
5.0 out of 5 stars
These
Reviewed in the United States on September 3, 2021
Verified Purchase
These balls are so good to use because I have rheumatoid arthritis and it helps my hands so much. I
need to strengthen my hands and this has helped so much.
Helpful
Report
Amazon Customer
5.0 out of 5 stars
Love them!
Reviewed in the United States on November 11, 2020
Verified Purchase
A teacher I work with had one and didn't know where to find it- I lucked up and these are exactly the
same. I like this because it doesn't seem like you can break them, without actively using some sharp
to do so. The middle schoolers I work with love using these!
Helpful
Report
J G Stamps
5.0 out of 5 stars
Great non slippery squeeze balls in bright colors
Reviewed in the United States on December 18, 2020
Verified Purchase
Bought these for my elderly mom who had a stroke and wanted to re-teach her left hand to grip.
These are perfect for her, not slippery, brightly colored, and progressive strengths. Anybody wanting
to build up grip and forearms will enjoy. Also stress relieving in 2020.
One person found this helpful
Helpful
Report
Betty C. Shaheen
5.0 out of 5 stars
Therapy for hand
Reviewed in the United States on July 26, 2022
Verified Purchase
Good for therapy on hand.. Just right size for my hand.
Helpful
Report
J. Hatch
3.0 out of 5 stars
Too small
Reviewed in the United States on March 12, 2022
Verified Purchase
The balls seem to be good quality but they should be bigger to engage all fingers and thumb
Helpful
Report
Kimmy in MD
5.0 out of 5 stars
Great Exercise Tool!
Reviewed in the United States on August 27, 2022
Verified Purchase
Love these bands for working legs and glutes!
Helpful
Report
May
5.0 out of 5 stars
Good therapeutic item
Reviewed in the United States on July 6, 2021
Verified Purchase
Perfect item for my own home PT therapy . If you have had a broken hand in past or now, get this
item to help with the therapy healing process
Helpful
Report
Denise
3.0 out of 5 stars
All the same?
Reviewed in the United States on September 2, 2021
Verified Purchase
Purchased these for a family member in rehab. I could not determine the different resistance levels
they all felt the same. In the end he didn't use.
Helpful
Report
Frank
4.0 out of 5 stars
Good product
Reviewed in the United States on May 20, 2021
Verified Purchase
Good product. Very useful.
Helpful
Report
Alicia G
5.0 out of 5 stars
Good
Reviewed in the United States on September 10, 2022
Verified Purchase
Good exercise motivation
Helpful
Report
DB
4.0 out of 5 stars
good
Reviewed in the United States on June 12, 2021
Verified Purchase
worked well
Helpful
Report
NonnaVO
5.0 out of 5 stars
Just what my husband was looking for
Reviewed in the United States on March 12, 2022
Verified Purchase
Good value for the cost. Helpful with exercise of arthritic hands
Helpful
Report
LW
3.0 out of 5 stars
They work price is good.
Reviewed in the United States on June 17, 2021
Verified Purchase
They aren't marked so you know which size is the easiest to the hardest. Which makes it hard to
know if you are using the right one.
Helpful
Report
Barabara Sagraves
5.0 out of 5 stars
Great for hand exercise
Reviewed in the United States on September 18, 2021
Verified Purchase
Husband has had shoulder surgery. These have kept his hand from swelling because he can’t move
his shoulder or arm.
Helpful
Report
Cindylou
3.0 out of 5 stars
Okay
Reviewed in the United States on April 26, 2022
Verified Purchase
I was looking for something softer
Helpful
Report
Alan
5.0 out of 5 stars
These are just what I was looking for. The size is just right and they are easy to use.
Reviewed in the United States on September 13, 2021
Verified Purchase
These are just what I was looking for. The size is just right and they are easy to use.
Helpful
Report
Fran
4.0 out of 5 stars
Great hand massage
Reviewed in the United States on April 10, 2021
Verified Purchase
Great for arthritic hands
Helpful
Report
2004done
2.0 out of 5 stars
3 of the same
Reviewed in the United States on January 9, 2021
Verified Purchase
Not much difference in the three,, unless you don't like the color. Trying to rehab myself from a
broken wrist, so practicing juggling is a fun part of it ( no, I can't juggle any longer, but couldn't before
either as the saying goes). I AM able to deflect with fingertips' strength now, so it is working. I use a
rolled up towel for flexing (which I thought these would work), but these are only for strength
exercise. Can't really recommend them, other than for juggling (they're much better than using
eggs).
Karenv
5.0 out of 5 stars
Great size and good resistance
Reviewed in the United States on August 10, 2020
Verified Purchase
These stress balls are smaller than I expected but they are actually perfect for my hand.
The increasingly hard resistance is just what I need to strengthen my hand after a fracture.
Helpful
Report
Jose V.
4.0 out of 5 stars
good quality product for this price
Reviewed in the United States on July 4, 2020
Verified Purchase
Nice and easy to use. Good quality to this price
Helpful
Report
Mark Ashworth
3.0 out of 5 stars
Too small for my hands
Reviewed in the United States on January 31, 2021
Verified Purchase
I like the variation in resistance but they are too small for my hands which are not very large. I have
to use two balls at a time which is awkward.
Helpful
Report
i m irene
5.0 out of 5 stars
Good for rehab in broken arm
Reviewed in the United States on November 27, 2021
Verified Purchase
Do not let animals get this. It is not a toy
Helpful
Report
Nelson
5.0 out of 5 stars
Strength ball
Reviewed in the United States on March 16, 2022
Verified Purchase
Fix in my plan very easily
Helpful
Report
dave ratalsky
5.0 out of 5 stars
Good
Reviewed in the United States on August 7, 2021
Verified Purchase
They’re round and squeezable. They do what they were made for. Enough said.
Helpful
Report
rochelle conner
5.0 out of 5 stars
good fit
Reviewed in the United States on April 27, 2022
Verified Purchase
none
Helpful
Report
Bob D Weakley
5.0 out of 5 stars
They are just I was looking for and I expected
Reviewed in the United States on June 9, 2021
Verified Purchase
I like the size of them and how easy to always have one on all the time.
Helpful
Report
Drew
4.0 out of 5 stars
Good
Reviewed in the United States on October 30, 2020
Verified Purchase
They do the job
Helpful
Report
GL
5.0 out of 5 stars
They do make a difference
Reviewed in the United States on March 30, 2021
Verified Purchase
When you do the exercises everyday there is a sizable difference. Also, just squeezing the ball is a
good stress reliever
Robert E Gauldin
5.0 out of 5 stars
Great exercise balls.
Reviewed in the United States on August 4, 2020
Verified Purchase
I find the useful for hand exercises. They do feel a bit sticky but don't seem to p pick up any dirt. I'm
very pleased with them.
Helpful
Report
DebbieA
5.0 out of 5 stars
Perfect in every way , and great to get hands strengthened
Reviewed in the United States on September 4, 2019
Verified Purchase
Perfect size, squeeze resistance, and can use for hours to help add dexterity to weakened hands! I
would prefer that they all came in one zip top bag though, but overall these balls rock!!
8 people found this helpful
Helpful
Report
Barbara
5.0 out of 5 stars
very effective
Reviewed in the United States on July 3, 2021
Verified Purchase
The balls are very helpful for an exercise for my arthritic and neuropathy hands.
Helpful
Report
K. Johansen
2.0 out of 5 stars
Not recommended
Reviewed in the United States on June 22, 2021
Verified Purchase
Got these and was surprised at how small they are, so small that I doubt they would even be good
for a kid. The difference in tension is also pretty bad, not much difference at all. Of course these are
made in china. Will go back to the devices I was using, thought maybe these would be good, but I do
not recommend them
One person found this helpful
Helpful
Report
James P. Bontrager
3.0 out of 5 stars
Way to much wrapping!
Reviewed in the United States on December 29, 2021
Verified Purchase
Average
Helpful
Report
Anthony
5.0 out of 5 stars
Great for rehabilitation of the hand.
Reviewed in the United States on October 10, 2020
Verified Purchase
I bought these for my mother after she broke her wrist so she could rebuild strength in her hand and
she loves them.
Helpful
Report
Jesse
5.0 out of 5 stars
Get them
Reviewed in the United States on March 17, 2021
Verified Purchase
Just had carpal tunnel surgery and this is getting my hand back to strength fast.
Helpful
Report
adonais d.
5.0 out of 5 stars
They are very nice, I recommend them
Reviewed in the United States on August 14, 2021
Verified Purchase
I liked them, very soft for my hands, I recommend them
Helpful
Report
stephanie D
5.0 out of 5 stars
I haven’t used the balls very long, but they seem to help pain.
Reviewed in the United States on April 1, 2020
Verified Purchase
I am using the exercise balls to relieve the arthritis in my hands. I have trigger fingers on both hands
and the exercise seems to help.
One person found this helpful
Helpful
Report
Customer 777
2.0 out of 5 stars
Easy to bite in half for child or dementia patient so be careful
Reviewed in the United States on November 16, 2022
Verified Purchase
Easy to bite Chunks out be careful not for children or confused elderly | You must generate a response using only this provided document. Do not use any other outside source to support your claims. If you are unable to answer the request using the supporting document only, then you must respond with "please support more relevant documents so that I may answer your request accurately".
What do the ratings say that are 2 stars and below?
Top positive review
Positive reviews›
Jodi P
5.0 out of 5 stars
Is as described
Reviewed in the United States on December 14, 2023
Like the balls, good for exercising fingers. A bit small for full hand workout
3 people found this helpful
Top critical review
Critical reviews›
Bonnie Rosenstock
3.0 out of 5 stars
Not very substantial
Reviewed in the United States on November 23, 2023
Too small. So not very good workout.
2 people found this helpful
Search
SORT BY
Top reviewsMost recent
Top reviews
FILTER BY
All reviewersVerified purchase only
All reviewers
All stars5 star only4 star only3 star only2 star only1 star onlyPositive reviewsCritical reviews
All stars
Text, image, videoImage and video reviews only
Text, image, video
3,286 total ratings, 194 with reviews
From the United States
Jodi P
5.0 out of 5 stars
Is as described
Reviewed in the United States on December 14, 2023
Verified Purchase
Like the balls, good for exercising fingers. A bit small for full hand workout
3 people found this helpful
Helpful
Report
Jesse B
5.0 out of 5 stars
Great exercise for your hands
Reviewed in the United States on January 29, 2024
Verified Purchase
Have a little arthritis in both hands, and I use the balls to exercise my grip. Works great.
Helpful
Report
Ronda Sasser
4.0 out of 5 stars
Good for PT
Reviewed in the United States on September 10, 2023
Verified Purchase
Good for strength training your hands after shoulder surgery.
Helpful
Report
Marie Skinner
5.0 out of 5 stars
Just what i was looking for.
Reviewed in the United States on January 6, 2024
Verified Purchase
As a massage therapist, i use my hands a lot. I got these balls to strengthen them. The balls are
easy to use.
Helpful
Report
Bonnie Rosenstock
3.0 out of 5 stars
Not very substantial
Reviewed in the United States on November 23, 2023
Verified Purchase
Too small. So not very good workout.
2 people found this helpful
Helpful
Report
Paul Gabriel Wiener
5.0 out of 5 stars
They do what they're supposed to do
Reviewed in the United States on September 17, 2022
Verified Purchase
Set of 3 squeeze balls. Yellow is pretty soft, orange is moderately firm, and blue is kind of tough.
They've got a good texture. Just rough enough to have some grip without being irritating to hold.
They helped strengthen my arms in preparation for some IV treatment, and they're also just fun to
squeeze. They'd make good juggling practice balls, too, if you're into that.
7 people found this helpful
Helpful
Report
E. Nawrocki
5.0 out of 5 stars
A little sticky at first
Reviewed in the United States on August 30, 2023
Verified Purchase
These were a little sticky at first but got better during use. Helped with my hands that had some
ligament damage.
One person found this helpful
Helpful
Report
DianaQ
5.0 out of 5 stars
Great Squishy Balls
Reviewed in the United States on August 5, 2022
Verified Purchase
Broke my arm in three places and wound up with a big, purple, swollen hand. Surgeon suggested
this type of hand exercise to get my hand back to normal. I have poor circulation in the other hand
(goes to sleep easily) so now I do two-handed squishy ball squeezes as I watch TV in the evening.
It’s clearly benefiting both hands! Good value for the money spent. Zippered case keeps them clean.
Don’t know why anyone would need to spend more on exercise balls like these.
3 people found this helpful
Helpful
Report
Richard Lyda
4.0 out of 5 stars
Squeeze balls
Reviewed in the United States on July 25, 2023
Verified Purchase
They are squeeze balls for medical purposes
They squeeze what can I say
Helpful
Report
Prairie Gal
3.0 out of 5 stars
Just ok
Reviewed in the United States on November 2, 2023
Verified Purchase
There was no indication of the colors and resistance levels and it is very hard to feel the difference!
Ok for the money paid!
One person found this helpful
From the United States
Wesismore
2.0 out of 5 stars
Not what I wanted
Reviewed in the United States on January 31, 2024
Verified Purchase
These feel cheap. They say that there are 3 levels of resistence which is nonsense. Both I and my
mother who I bought these for, couldn't tell/feel the differences among them. Also, they say they are
2 inches across, they are not. They measure smaller and feel as such in ones hand. I am returning
for a refund.
Helpful
Report
Norine McDonald Tepas
4.0 out of 5 stars
PT
Reviewed in the United States on July 16, 2023
Verified Purchase
Suggested by my Doctor and PT
Helpful
Report
J. Smith
4.0 out of 5 stars
Different strengths are great
Reviewed in the United States on April 30, 2023
Verified Purchase
I like the idea I can have the option of the different strengths. I wish they were a little bit bigger. I
have osteoarthritis in my fingers and the stress balls really help.
2 people found this helpful
Helpful
Report
Marie
4.0 out of 5 stars
Stress Balls
Reviewed in the United States on June 28, 2023
Verified Purchase
They are Ok
Helpful
Report
Francisco
4.0 out of 5 stars
Quite good
Reviewed in the United States on May 13, 2023
Verified Purchase
Pretty happy with them. Wish they were bigger, but otherwise got what I wanted
2 people found this helpful
Helpful
Report
Angela C. Adams
5.0 out of 5 stars
soft
Reviewed in the United States on October 4, 2023
Verified Purchase
easy to use
One person found this helpful
Helpful
Report
Angela K.
4.0 out of 5 stars
Smaller than expected
Reviewed in the United States on February 21, 2023
Verified Purchase
Like the material. It’s easy to grip and not slippery. Many options for hand and finger strengthening
2 people found this helpful
Helpful
Report
Charles L.
4.0 out of 5 stars
A bit small for a woman's hand
Reviewed in the United States on February 20, 2023
Verified Purchase
A bit small to do physical therapy for an average woman's hand, but otherwise very good.
3 people found this helpful
Helpful
Report
Debora Vardeman
5.0 out of 5 stars
Our Grand dogs love them
Reviewed in the United States on March 23, 2023
Verified Purchase
We buy these for our grand dogs as they are small enough for them to grab by the mouth and bring
back to us. Due to what they are made of, the dogs can not tear them apart. We also have a niece
dog that visits and she goes nuts over them. Very well made.
Helpful
Report
Maureen
5.0 out of 5 stars
3 firmness levels…works great!
Reviewed in the United States on August 20, 2023
Verified Purchase
I used this for exercising my hand. Loved that the colors correspond to the firmness levels.
3 people found this helpful
From the United States
Sharon DeLorenzo
3.0 out of 5 stars
Very small
Reviewed in the United States on June 6, 2023
Verified Purchase
Purchase this as part of OT after shoulder replacement to strengthen my hand grip. I am the petite
woman and these are very small did not like at all. Returned
3 people found this helpful
Helpful
Report
dale decarlo
2.0 out of 5 stars
Too small
Reviewed in the United States on January 10, 2024
Verified Purchase
The person in the picture must have tiny little hands. These were very small.
Helpful
Report
Robert
3.0 out of 5 stars
excersise ball
Reviewed in the United States on July 5, 2023
Verified Purchase
Image is mis leading. To small. Dont reccomend to buy.
2 people found this helpful
Helpful
Report
Debby
4.0 out of 5 stars
I bought it for me
Reviewed in the United States on December 23, 2022
Verified Purchase
Broke my wrist and need them for therapy
2 people found this helpful
Helpful
Report
Christy
5.0 out of 5 stars
100% helpful
Reviewed in the United States on May 12, 2023
Verified Purchase
Love these. I'm trying to build up wrist/finger strength and these are great way to start. I can use at
desk during work.
One person found this helpful
Helpful
Report
David C. Fischer
2.0 out of 5 stars
Too small
Reviewed in the United States on December 29, 2023
Verified Purchase
Too small to be of much use
Helpful
Report
Kathleen S. Jablonski
4.0 out of 5 stars
Smaller than expected, but a good feel in my hand.
Reviewed in the United States on August 14, 2022
Verified Purchase
Smaller than expected, but a good feel in my hand. I’m not sure I like the sort of sticky feeling to the
gel, but on the overall, I think it’s a great value.
One person found this helpful
Helpful
Report
Brittany Chavarria
5.0 out of 5 stars
Lo recomiendo
Reviewed in the United States on May 15, 2023
Verified Purchase
Las pelotas son de un buen tamaño, tienen diferentes intensidades y es de muy buen material
One person found this helpful
Helpful
Report
Translate review to English
Emily
5.0 out of 5 stars
Makes hands feel better.
Reviewed in the United States on June 18, 2023
Verified Purchase
Using them seems to help my arthritis
Helpful
Report
Sara Martin
5.0 out of 5 stars
Good Product
Reviewed in the United States on June 17, 2023
Verified Purchase
Will use this product in physical therapy
From the United States
Beth
5.0 out of 5 stars
Nice
Reviewed in the United States on June 18, 2023
Verified Purchase
Has improved grip and strength
Helpful
Report
Lee W.
4.0 out of 5 stars
For my RA and carpal tunnel hand exercises
Reviewed in the United States on January 29, 2020
Verified Purchase
What I like: The size is just right for the average women's hands and it has three levels of
resistance-yellow/softer resistance, orange/medium resistance, blue/ harder resistance. Just enough
resistance so that you can press them but not collapse them. Each came in its own little zip lock bag.
What I kinda don't like: Feel weird...They are sticky like those toys my kids use to play with that you
throw at the wall and it sticks, then it slowly 'crawls' back down. So I use it inside of its plastic bag.
Crinkly but works.
22 people found this helpful
Helpful
Report
D. Lefever
5.0 out of 5 stars
Great for weak, elderly hands
Reviewed in the United States on January 9, 2023
Verified Purchase
My doctor said to buy these, and I use occasionally every night while watching TV. Fingers are
stronger and I'm dropping a lot less. Keep away from dogs.
3 people found this helpful
Helpful
Report
Nancy Alameda
5.0 out of 5 stars
Too small
Reviewed in the United States on April 29, 2021
Verified Purchase
I just really like them. I think they’ll be very helpful for my old painful hands.
After having used them for several days I’ve come to the conclusion that they are too small. I’m only
able to squeeze with my first three fingers. My thumb and pinky finger are uninvolved. I will send
them back and have already ordered a different set. I think these would be great for kids, but I don’t
know why kids would need them, unless for an injury.
6 people found this helpful
Helpful
Report
Thuong Le
4.0 out of 5 stars
Good
Reviewed in the United States on April 26, 2022
Verified Purchase
I practiced it every night and it worked. My hand feel better and wasn’t numb when I woke up.
Helpful
Report
JONATHAN V.
5.0 out of 5 stars
Good to have
Reviewed in the United States on May 2, 2023
Verified Purchase
Great to have
One person found this helpful
Helpful
Report
Samuel Moore II
4.0 out of 5 stars
Perfect
Reviewed in the United States on February 12, 2022
Verified Purchase
My father had a stroke in Dec 2021 He lost a little strength in his left hand, these were perfect for
him.
One person found this helpful
Helpful
Report
Tikiroom2435
3.0 out of 5 stars
No chart or label with firmness of each ball. Sticky to the touch. Okay for the price.
Reviewed in the United States on January 8, 2020
Verified Purchase
Ordered these balls for therapy after thumb ligament joint reconstruction surgery for osteoarthritis.
Great price but you get what you pay for. The balls are good size for my small hands but they are
sticky to the touch. The balls have imperfections which i can feel on my skin...weird. Was very
disappointed the balls arrived with no chart or instructions stating the firmness of each color. The
orange and yellow were so similar in firmness, I couldn’t tell which was which. My memory is not the
best but hate I have to keep looking up the chart photo on the Amazon listing to see which is which.
For the price, these are ok for me to start with but I think a cloth covered stress ball work better in my
situation.
8 people found this helpful
Helpful
Report
Litigator Rater
2.0 out of 5 stars
No instructions for use of the product
Reviewed in the United States on April 28, 2023
Verified Purchase
I received three spheres of varying color and density, in a clear cellophane envelope. There were no
instructions for use or maintenance. Inasmuch as these are advertised for exercise, it is unfair that
the promotional instructions are not provided to the buyers of the product. I suppose the only way to
see the ads on Amazon is through screen captures.
Helpful
Report
Isbel feliz
5.0 out of 5 stars
Excelente
Reviewed in the United States on April 20, 2023
Verified Purchase
Que llegaron intactas
From the United States
Robert F Anderson
1.0 out of 5 stars
sticky lint traps that I dont even want to touch!!!
Reviewed in the United States on February 14, 2024
Verified Purchase
sticky lint traps that I dont even want to touch let alone exercise!!! Total waste of money.
Helpful
Report
BILL SKEBECK
5.0 out of 5 stars
Very nice product!
Reviewed in the United States on October 23, 2022
Verified Purchase
Satisfied with product. First package came empty but Amazon customer service immediately
corrected this and sent the order very quickly and got the right package quickly....all good!
One person found this helpful
Helpful
Report
darknology
3.0 out of 5 stars
Gummy Balls
Reviewed in the United States on November 3, 2022
Verified Purchase
They have a gummy/sticky feel, which I find unpleasant. They each have a different consistency - as
advertised. I prefer the 2.5-inch ball that I have. Impressive colors, though.
2 people found this helpful
Helpful
Report
G. Boehm
5.0 out of 5 stars
Received my order a few days ago
Reviewed in the United States on March 14, 2023
Verified Purchase
It was what I wanted
Helpful
Report
all way seen
5.0 out of 5 stars
3 different level of softness. perfect for elders
Reviewed in the United States on February 10, 2023
Verified Purchase
my mother likes these smaller size relief balls.
2 people found this helpful
Helpful
Report
Sharon
3.0 out of 5 stars
VERY SMALL
Reviewed in the United States on July 22, 2021
Verified Purchase
These balls are very small (even for a woman's hands) and they are sticky/slimy at first touch. After
a bit of use (reluctantly) they do "dry up" somewhat. I needed to try them because I couldn't find
"stress balls" anywhere locally and I need them for finger stiffness resulting from a broken wrist. I will
likely return these when I find larger ones to buy from Amazon. Disappointed.
4 people found this helpful
Helpful
Report
Richard B.
3.0 out of 5 stars
Misleading Ad
Reviewed in the United States on February 22, 2022
Verified Purchase
Misleading, certainly shows what looks like a carry bag in the ad, but you don't get one. But the pic
of a carry bag (look alike) swayed the decision to buy it. Why show something that is not included,
unless you wanted to sway a person's choice.
One person found this helpful
Helpful
Report
SFR
4.0 out of 5 stars
Works best for small hands
Reviewed in the United States on December 10, 2021
Verified Purchase
My hands are not small but the balls work okay.
Helpful
Report
Karin M
4.0 out of 5 stars
A decent option
Reviewed in the United States on July 1, 2021
Verified Purchase
I'm not really able to tell a difference in the strength on these, and they are just a bit too small.
Helpful
Report
Kindle Customer
4.0 out of 5 stars
Worth the money
Reviewed in the United States on July 11, 2021
Verified Purchase
These work well for what I needed them for help with my hands that have tendinitis
From the United States
Shmuelman
5.0 out of 5 stars
I thought they would be too small...
Reviewed in the United States on September 1, 2022
Verified Purchase
but when I started using them they are just right. Very comfortable and addictive to use.
Helpful
Report
Grace Laine
4.0 out of 5 stars
Addictive Therapy
Reviewed in the United States on August 24, 2020
Verified Purchase
I need these for numbness in my hands and fingers and use them habitually, either squeezing them
or rolling them in my palm for dexterity. There's a slight difference in thickness - mostly felt in the
blue ball. They're addictive and helpful.
One person found this helpful
Helpful
Report
WildWest
5.0 out of 5 stars
Do the job
Reviewed in the United States on November 27, 2021
Verified Purchase
Price point was great; definitely very different firmness. I used these after a bicep tendon
reattachment and had the three for only a bit more than the kids tennis ball my physical therapist
recommended.
Helpful
Report
ARMANDO BALTAZAR
4.0 out of 5 stars
Too small for a mans hand
Reviewed in the United States on September 9, 2021
Verified Purchase
The balls are too small for a mans hand
Helpful
Report
mnt
5.0 out of 5 stars
these are great
Reviewed in the United States on April 26, 2021
Verified Purchase
Only drawback is they don't come with the instructions for different exercises. Balls are nicely made
and a great substance. Just started with the yellow, which is lightest resistance but appreciate
having the others to upgrade to appropriately. They feel good to the touch.
2 people found this helpful
Helpful
Report
SILKOAK
5.0 out of 5 stars
good prodict
Reviewed in the United States on February 15, 2022
Verified Purchase
I ordered the balls to exercise my arthritic fingers and i do this numerous times a day. It will take
awhile but hope it helps.
Helpful
Report
Rainey
5.0 out of 5 stars
Hand therapeutic exercise balls
Reviewed in the United States on November 19, 2022
Verified Purchase
These are just as good as the Gaiam products.
One person found this helpful
Helpful
Report
LZee
5.0 out of 5 stars
Awesome
Reviewed in the United States on May 30, 2022
Verified Purchase
My Mom uses it for her arthritis. Her massage therapist had great comments about it. Mom is happy
Helpful
Report
Vince D
5.0 out of 5 stars
Does the job
Reviewed in the United States on October 12, 2021
Verified Purchase
I see reviews stating that there’s not much of a difference in resistance between the three. There’s a
significant difference to someone rehabbing a hand injury. Well worth trying for the price.
Helpful
Report
Mileyka
5.0 out of 5 stars
Muy prácticas
Reviewed in the United States on February 20, 2022
Verified Purchase
Buena inversión porque no son muy grandes. Que se pueden llevar para cualquier lugar y así
mantener ejercitadas las manos y dedos.
From the United States
Sue
4.0 out of 5 stars
it works great for my needs
Reviewed in the United States on March 25, 2021
Verified Purchase
I like that it fits in my hands perfectly. Just firm enough to work my hands.
Helpful
Report
L. Key
5.0 out of 5 stars
Exercise for broken wrist
Reviewed in the United States on September 14, 2021
Verified Purchase
These are great to help a broken wrist heal! My wrist stopped hurting after I started using the ball! I
highly recommend these to anyone who has broken their wrist!!
Helpful
Report
Lorie
5.0 out of 5 stars
These
Reviewed in the United States on September 3, 2021
Verified Purchase
These balls are so good to use because I have rheumatoid arthritis and it helps my hands so much. I
need to strengthen my hands and this has helped so much.
Helpful
Report
Amazon Customer
5.0 out of 5 stars
Love them!
Reviewed in the United States on November 11, 2020
Verified Purchase
A teacher I work with had one and didn't know where to find it- I lucked up and these are exactly the
same. I like this because it doesn't seem like you can break them, without actively using some sharp
to do so. The middle schoolers I work with love using these!
Helpful
Report
J G Stamps
5.0 out of 5 stars
Great non slippery squeeze balls in bright colors
Reviewed in the United States on December 18, 2020
Verified Purchase
Bought these for my elderly mom who had a stroke and wanted to re-teach her left hand to grip.
These are perfect for her, not slippery, brightly colored, and progressive strengths. Anybody wanting
to build up grip and forearms will enjoy. Also stress relieving in 2020.
One person found this helpful
Helpful
Report
Betty C. Shaheen
5.0 out of 5 stars
Therapy for hand
Reviewed in the United States on July 26, 2022
Verified Purchase
Good for therapy on hand.. Just right size for my hand.
Helpful
Report
J. Hatch
3.0 out of 5 stars
Too small
Reviewed in the United States on March 12, 2022
Verified Purchase
The balls seem to be good quality but they should be bigger to engage all fingers and thumb
Helpful
Report
Kimmy in MD
5.0 out of 5 stars
Great Exercise Tool!
Reviewed in the United States on August 27, 2022
Verified Purchase
Love these bands for working legs and glutes!
Helpful
Report
May
5.0 out of 5 stars
Good therapeutic item
Reviewed in the United States on July 6, 2021
Verified Purchase
Perfect item for my own home PT therapy . If you have had a broken hand in past or now, get this
item to help with the therapy healing process
Helpful
Report
Denise
3.0 out of 5 stars
All the same?
Reviewed in the United States on September 2, 2021
Verified Purchase
Purchased these for a family member in rehab. I could not determine the different resistance levels
they all felt the same. In the end he didn't use.
Helpful
Report
From the United States
Frank
4.0 out of 5 stars
Good product
Reviewed in the United States on May 20, 2021
Verified Purchase
Good product. Very useful.
Helpful
Report
Alicia G
5.0 out of 5 stars
Good
Reviewed in the United States on September 10, 2022
Verified Purchase
Good exercise motivation
Helpful
Report
DB
4.0 out of 5 stars
good
Reviewed in the United States on June 12, 2021
Verified Purchase
worked well
NonnaVO
5.0 out of 5 stars
Just what my husband was looking for
Reviewed in the United States on March 12, 2022
Verified Purchase
Good value for the cost. Helpful with exercise of arthritic hands
LW
3.0 out of 5 stars
They work price is good.
Reviewed in the United States on June 17, 2021
Verified Purchase
They aren't marked so you can't tell which size is the easiest and which is the hardest, which makes it hard to
know if you are using the right one.
Barabara Sagraves
5.0 out of 5 stars
Great for hand exercise
Reviewed in the United States on September 18, 2021
Verified Purchase
Husband has had shoulder surgery. These have kept his hand from swelling because he can’t move
his shoulder or arm.
Cindylou
3.0 out of 5 stars
Okay
Reviewed in the United States on April 26, 2022
Verified Purchase
I was looking for something softer
Alan
5.0 out of 5 stars
These are just what I was looking for. The size is just right and they are easy to use.
Reviewed in the United States on September 13, 2021
Verified Purchase
These are just what I was looking for. The size is just right and they are easy to use.
Fran
4.0 out of 5 stars
Great hand massage
Reviewed in the United States on April 10, 2021
Verified Purchase
Great for arthritic hands
2004done
2.0 out of 5 stars
3 of the same
Reviewed in the United States on January 9, 2021
Verified Purchase
Not much difference in the three, unless you don't like the color. Trying to rehab myself from a
broken wrist, so practicing juggling is a fun part of it ( no, I can't juggle any longer, but couldn't before
either as the saying goes). I AM able to deflect with fingertips' strength now, so it is working. I use a
rolled up towel for flexing (which I thought these would work), but these are only for strength
exercise. Can't really recommend them, other than for juggling (they're much better than using
eggs).
Karenv
5.0 out of 5 stars
Great size and good resistance
Reviewed in the United States on August 10, 2020
Verified Purchase
These stress balls are smaller than I expected but they are actually perfect for my hand.
The increasingly hard resistance is just what I need to strengthen my hand after a fracture.
Jose V.
4.0 out of 5 stars
good quality product for this price
Reviewed in the United States on July 4, 2020
Verified Purchase
Nice and easy to use. Good quality to this price
Mark Ashworth
3.0 out of 5 stars
Too small for my hands
Reviewed in the United States on January 31, 2021
Verified Purchase
I like the variation in resistance but they are too small for my hands which are not very large. I have
to use two balls at a time which is awkward.
i m irene
5.0 out of 5 stars
Good for rehab in broken arm
Reviewed in the United States on November 27, 2021
Verified Purchase
Do not let animals get this. It is not a toy
Nelson
5.0 out of 5 stars
Strength ball
Reviewed in the United States on March 16, 2022
Verified Purchase
Fix in my plan very easily
dave ratalsky
5.0 out of 5 stars
Good
Reviewed in the United States on August 7, 2021
Verified Purchase
They’re round and squeezable. They do what they were made for. Enough said.
rochelle conner
5.0 out of 5 stars
good fit
Reviewed in the United States on April 27, 2022
Verified Purchase
none
Bob D Weakley
5.0 out of 5 stars
They are just what I was looking for and what I expected
Reviewed in the United States on June 9, 2021
Verified Purchase
I like the size of them and how easy it is to always have one on hand.
Drew
4.0 out of 5 stars
Good
Reviewed in the United States on October 30, 2020
Verified Purchase
They do the job
GL
5.0 out of 5 stars
They do make a difference
Reviewed in the United States on March 30, 2021
Verified Purchase
When you do the exercises every day there is a sizable difference. Also, just squeezing the ball is a
good stress reliever
Robert E Gauldin
5.0 out of 5 stars
Great exercise balls.
Reviewed in the United States on August 4, 2020
Verified Purchase
I find them useful for hand exercises. They do feel a bit sticky but don't seem to pick up any dirt. I'm
very pleased with them.
DebbieA
5.0 out of 5 stars
Perfect in every way , and great to get hands strengthened
Reviewed in the United States on September 4, 2019
Verified Purchase
Perfect size, squeeze resistance, and can use for hours to help add dexterity to weakened hands! I
would prefer that they all came in one zip top bag though, but overall these balls rock!!
8 people found this helpful
Barbara
5.0 out of 5 stars
very effective
Reviewed in the United States on July 3, 2021
Verified Purchase
The balls are very helpful for an exercise for my arthritic and neuropathy hands.
K. Johansen
2.0 out of 5 stars
Not recommended
Reviewed in the United States on June 22, 2021
Verified Purchase
Got these and was surprised at how small they are, so small that I doubt they would even be good
for a kid. The difference in tension is also pretty bad, not much difference at all. Of course these are
made in China. I will go back to the devices I was using; I thought maybe these would be good, but I do
not recommend them.
One person found this helpful
James P. Bontrager
3.0 out of 5 stars
Way too much wrapping!
Reviewed in the United States on December 29, 2021
Verified Purchase
Average
Anthony
5.0 out of 5 stars
Great for rehabilitation of the hand.
Reviewed in the United States on October 10, 2020
Verified Purchase
I bought these for my mother after she broke her wrist so she could rebuild strength in her hand and
she loves them.
Jesse
5.0 out of 5 stars
Get them
Reviewed in the United States on March 17, 2021
Verified Purchase
Just had carpal tunnel surgery and this is getting my hand back to strength fast.
adonais d.
5.0 out of 5 stars
Very nice, I recommend them
Reviewed in the United States on August 14, 2021
Verified Purchase
I liked them; very gentle on my hands. I recommend them.
stephanie D
5.0 out of 5 stars
I haven’t used the balls very long, but they seem to help pain.
Reviewed in the United States on April 1, 2020
Verified Purchase
I am using the exercise balls to relieve the arthritis in my hands. I have trigger fingers on both hands
and the exercise seems to help.
One person found this helpful
Customer 777
2.0 out of 5 stars
Easy to bite in half for child or dementia patient so be careful
Reviewed in the United States on November 16, 2022
Verified Purchase
Easy to bite chunks out of, so be careful; not for children or confused elderly.
Please only use information from the provided PDF to answer this prompt. Do not act like you are an expert in legal affairs in any way. | Using the provided section of text below, please summarize the reasons for concurring and dissenting opinions.
Concurring and Dissenting Opinions
A majority of the Court—six Justices in total—wrote separately to concur or dissent, offering
their individual views on how the Second Amendment and the Bruen standard should be properly
interpreted both in this case and in future cases.
Justice Sotomayor’s concurring opinion, joined by Justice Kagan, expressed her continued view
that Bruen was wrongly decided and that a different legal standard should apply to Second
Amendment cases.73 She wrote separately to emphasize that when applying the Bruen historical
tradition standard, however, the majority’s methodology was the “right one.”74 In Justice
Sotomayor’s view, this is an “easy case,” as § 922(g)(8) is “wholly consistent” with historical
firearms regulations.75 By contrast, she criticized the dissenting view as too “rigid,”
characterizing it as “insist[ing] that the means of addressing that problem cannot be ‘materially
different’ from the means that existed in the eighteenth century,” which would unduly hamstring
modern policy efforts.76
In his concurring opinion, Justice Gorsuch underscored the difficulty in maintaining a facial
challenge to a law, which requires a showing that the law has no constitutional applications.77 He
also defended the Bruen historical tradition standard, arguing that the original meaning of the
Constitution, while “an imperfect guide,” provides proper constraints on judicial decisionmaking
and is better than unbounded alternatives such as an interest-balancing inquiry.78 Justice Gorsuch
also cautioned that the Court decided a narrow question—whether § 922(g)(8) “has any lawful
scope”—and that future defendants could argue that § 922(g)(8) was unconstitutional under
particular facts.79
69 Id. at 1901.
70 Id. at 1902.
71 Id.
72 Id. at 1903.
73 Id. at 1904 (Sotomayor, J., concurring).
74 Id.
75 Id.
76 Id. at 1905.
77 Id. at 1907 (Gorsuch, J., concurring).
78 Id. at 1909.
79 Id. at 1910.
Justice Kavanaugh concurred to expound his view on the roles of text, history, and precedent in
constitutional interpretation. He explained that unambiguous text controls and that history, rather
than policy, is a more neutral and principled guide for constitutional decisionmaking when the
text is unclear.80 Using historical examples, Justice Kavanaugh illustrated his view on how pre- and post-ratification history may inform the meaning of vague constitutional text.81 Next, he
argued that balancing tests in constitutional cases are a relatively recent development, generally
depart from tests centered on text and history, are inherently subjective, and should not be
extended to the Second Amendment arena.82 Finally, he opined that the majority’s opinion was
faithful to his perception of the appropriate roles of text, history, and precedent in constitutional
adjudication in this particular case.83
Justice Barrett wrote a concurring opinion to explain her understanding of the relationship
between Bruen’s historical tradition test and originalism as a method of constitutional
interpretation. In her view, historical tradition is a means to understand original meaning, and,
accordingly, historical practice around the time of ratification should be the focus of the legal
inquiry.84 In her view, history demonstrates that, “[s]ince the founding, our Nation’s firearm laws
have included provisions preventing individuals who threaten physical harm to others from
misusing firearms.” Justice Barrett agreed with the majority that § 922(g)(8) “fits well within that
principle.”85
Justice Jackson also wrote a concurring opinion, agreeing that the majority fairly applied Bruen
as precedent.86 She wrote separately to highlight what she perceived as problems with applying
the history-and-tradition standard in a workable manner.87 She argued that Rahimi illustrates the
“pitfalls of Bruen’s approach” by demonstrating the difficulty of sifting through the historical
record and determining whether historical evidence establishes a tradition of sufficiently
analogous regulation.88 The numerous unanswered questions that remain even after Rahimi, in her
view, result in “the Rule of Law suffer[ing].”89 Stating that legal standards should “foster stability,
facilitate consistency, and promote predictability,” Justice Jackson concluded by arguing that
“Bruen’s history-focused test ticks none of those boxes.”90
Justice Thomas was the sole dissenter. In his view, the historical examples cited by the majority
were not sufficient to establish a tradition of firearm regulation that justified § 922(g)(8).91
According to Justice Thomas, courts should look to two metrics to evaluate whether historical
examples of regulation are analogous to modern enactments: “how and why the regulations
burden a law-abiding citizen’s right to armed self-defense.”92 In his view, the two categories of
evidence proffered by the government—historical laws disarming “dangerous” individuals and
historical characterization of the right to bear arms as belonging only to “peaceable” citizens—
80 Id. at 1912 (Kavanaugh, J., concurring).
81 Id. at 1913–19.
82 Id. at 1921.
83 Id. at 1923.
84 Id. at 1924 (Barrett, J., concurring).
85 Id. at 1926 (quoting Rahimi, 144 S. Ct. at 1896 (majority opinion)).
86 Id. (Jackson, J., concurring).
87 Id. at 1928.
88 Id.
89 Id. at 1929.
90 Id.
91 Id. at 1930 (Thomas, J., dissenting).
92 Id. at 1931–32.
did not impose comparable burdens as § 922(g)(8).93 Justice Thomas argued that § 922(g)(8) was
enacted in response to “interpersonal violence,” whereas the historical English laws were
concerned with insurrection and rebellion.94 Ultimately, Rahimi could have been disarmed, in
Justice Thomas’s view, through criminal conviction but not through a restraining order.95 | Concurring and Dissenting Opinions
A majority of the Court—six Justices in total—wrote separately to concur or dissent, offering
their individual views on how the Second Amendment and the Bruen standard should be properly
interpreted both in this case and in future cases.
Justice Sotomayor’s concurring opinion, joined by Justice Kagan, expressed her continued view
that Bruen was wrongly decided and that a different legal standard should apply to Second
Amendment cases.73 She wrote separately to emphasize that when applying the Bruen historical
tradition standard, however, the majority’s methodology was the “right one.”74 In Justice
Sotomayor’s view, this is an “easy case,” as § 922(g)(8) is “wholly consistent” with historical
firearms regulations.75 By contrast, she criticized the dissenting view as too “rigid,”
characterizing it as “insist[ing] that the means of addressing that problem cannot be ‘materially
different’ from the means that existed in the eighteenth century,” which would unduly hamstring
modern policy efforts.76
In his concurring opinion, Justice Gorsuch underscored the difficulty in maintaining a facial
challenge to a law, which requires a showing that the law has no constitutional applications.77 He
also defended the Bruen historical tradition standard, arguing that the original meaning of the
Constitution, while “an imperfect guide,” provides proper constraints on judicial decisionmaking
and is better than unbounded alternatives such as an interest-balancing inquiry.78 Justice Gorsuch
also cautioned that the Court decided a narrow question—whether § 922(g)(3) “has any lawful
scope”—and that future defendants could argue that § 922(g)(3) was unconstitutional under
particular facts.79
69 Id. at 1901.
70 Id. at 1902.
71 Id.
72 Id. at 1903.
73 Id. at 1904 (Sotomayor, J., concurring).
74 Id.
75 Id.
76 Id. at 1905.
77 Id. at 1907 (Gorsuch, J., concurring).
78 Id. at 1909.
79 Id. at 1910.
Supreme Court Term October 2023: A Review of Selected Major Rulings
Congressional Research Service 8
Justice Kavanaugh concurred to expound his view on the roles of text, history, and precedent in
constitutional interpretation. He explained that unambiguous text controls and that history, rather
than policy, is a more neutral and principled guide for constitutional decisionmaking when the
text is unclear.80 Using historical examples, Justice Kavanaugh illustrated his view on how preand post-ratification history may inform the meaning of vague constitutional text.81 Next, he
argued that balancing tests in constitutional cases are a relatively recent development, generally
depart from tests centered on text and history, are inherently subjective, and should not be
extended to the Second Amendment arena.82 Finally, he opined that the majority’s opinion was
faithful to his perception of the appropriate roles of text, history, and precedent in constitutional
adjudication in this particular case.83
Justice Barrett wrote a concurring opinion to explain her understanding of the relationship
between Bruen’s historical tradition test and originalism as a method of constitutional
interpretation. In her view, historical tradition is a means to understand original meaning, and,
accordingly, historical practice around the time of ratification should be the focus of the legal
inquiry.84 In her view, history demonstrates that, “[s]ince the founding, our Nation’s firearm laws
have included provisions preventing individuals who threaten physical harm to others from
misusing firearms.” Justice Barrett agreed with the majority that § 922(g)(8) “fits well within that
principle.”85
Justice Jackson also wrote a concurring opinion, agreeing that the majority fairly applied Bruen
as precedent.86 She wrote separately to highlight what she perceived as problems with applying
the history-and-tradition standard in a workable manner.87 She argued that Rahimi illustrates the
“pitfalls of Bruen’s approach” by demonstrating the difficulty of sifting through the historical
record and determining whether historical evidence establishes a tradition of sufficiently
analogous regulation.88 The numerous unanswered questions that remain even after Rahimi, in her
view, result in “the Rule of Law suffer[ing].”89 Stating that legal standards should “foster stability,
facilitate consistency, and promote predictability,” Justice Jackson concluded by arguing that
“Bruen’s history-focused test ticks none of those boxes.”90
Justice Thomas was the sole dissenter. In his view, the historical examples cited by the majority
were not sufficient to establish a tradition of firearm regulation that justified § 922(g)(8).91
According to Justice Thomas, courts should look to two metrics to evaluate whether historical
examples of regulation are analogous to modern enactments: “how and why the regulations
burden a law-abiding citizen’s right to armed self-defense.”92 In his view, the two categories of
evidence proffered by the government—historical laws disarming “dangerous” individuals and
historical characterization of the right to bear arms as belonging only to “peaceable” citizens—
80 Id. at 1912 (Kavanaugh, J., concurring).
81 Id. at 1913–19.
82 Id. at 1921.
83 Id. at 1923.
84 Id. at 1924 (Barrett, J., concurring).
85 Id. at 1926 (quoting Rahimi, 144 S. Ct. at 1896 (majority opinion)).
86 Id. (Jackson, J., concurring).
87 Id. at 1928.
88 Id.
89 Id. at 1929.
90 Id.
91 Id. at 1930 (Thomas, J., dissenting).
92 Id. at 1931–32.
Supreme Court Term October 2023: A Review of Selected Major Rulings
Congressional Research Service 9
did not impose comparable burdens as § 922(g)(8).93 Justice Thomas argued that § 922(g)(8) was
enacted in response to “interpersonal violence,” whereas the historical English laws were
concerned with insurrection and rebellion.94 Ultimately, Rahimi could have been disarmed, in
Justice Thomas’s view, through criminal conviction but not through a restraining order.95 |
This task requires that you answer the following question based solely on the information provided in the prompt and context block. You are not allowed to use any external resources or prior knowledge. | How did the introduction of the 16-bit Intel 8088 microprocessor contribute to the rise of personal computers in mainstream business use?
The microprocessor, or CPU, as some people
call it, is the brains of our personal computer. I’m
getting into this history lesson not because I’m a
history buff (though computers do have a
wonderfully interesting past), but to go through
the development step-by-step to explain how
they work.
Well, not everything about how they work, but
enough to understand the importance of the latest features and what they do for you. It’s going to
take more than one article to dig into the inner secrets of microprocessors. I hope it’s an interesting
read for you and helps you recognize computer buzzwords when you’re making your next
computer purchase.
1. Where Did CPUs Come From?
When the 1970s dawned, computers were still monster machines hidden in
air-conditioned rooms and attended to by technicians in white lab coats. One
component of a mainframe computer, as they were known, was the CPU, or
Central Processing Unit. This was a steel cabinet bigger than a refrigerator full
of circuit boards crowded with transistors.
Computers had only recently been converted from vacuum tubes to transistors
and only the very latest machines used primitive integrated circuits where a
few transistors were gathered in one package. That means the CPU was a big
pile of equipment. The thought that the CPU could be reduced to a chip of
silicon the size of your fingernail was the stuff of science fiction.
2. How Does a CPU Work?
In the '40s, mathematicians John Von Neumann, J. Presper Eckert and John Mauchly came up
with the concept of the stored instruction digital computer. Before then, computers were
programmed by rewiring their circuits to perform a certain calculation over and over. By having a
memory and storing a set of instructions that can be performed over and over, as well as logic to
vary the path of instruction execution, programmable computers were possible.
The component of the computer that fetches the instructions and data from the memory and
carries out the instructions in the form of data manipulation and numerical calculations is called the
CPU. It’s central because all the memory and the input/output devices must connect to the CPU,
so it’s only natural to put the CPU in the middle to keep the cables short. It does all the instruction
execution and number calculations so it’s called the Processing Unit.
The CPU has a program counter that points to the next instruction to be executed. It goes through
a cycle where it retrieves, from memory, the instructions in the program counter. It then retrieves
the required data from memory, performs the calculation indicated by the instruction and stores the
result. The program counter is incremented to point to the next instruction and the cycle starts all
over.
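To make that fetch-and-execute cycle concrete, here is a minimal sketch in Python (this is not from the original article; the three opcodes, the addresses, and the little program are invented purely for illustration):

# Minimal stored-program machine: a program counter walks through memory,
# fetching an instruction, acting on it, and moving on to the next one.
memory = {
    0: ("LOAD", 100),   # put the value stored at address 100 into the accumulator
    1: ("ADD", 101),    # add the value stored at address 101
    2: ("STORE", 102),  # write the accumulator back to address 102
    3: ("HALT", None),
    100: 7, 101: 5, 102: 0,
}

pc = 0    # program counter
acc = 0   # accumulator, a single working register

while True:
    opcode, operand = memory[pc]   # fetch the instruction the program counter points to
    pc += 1                        # point to the next instruction
    if opcode == "LOAD":
        acc = memory[operand]      # fetch the required data from memory
    elif opcode == "ADD":
        acc += memory[operand]     # perform the calculation
    elif opcode == "STORE":
        memory[operand] = acc      # store the result
    elif opcode == "HALT":
        break

print(memory[102])   # prints 12

Running it shows the same loop the paragraph describes: fetch, execute, store, bump the program counter, repeat.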
3. The First Microprocessor
In 1971 when the heavy iron mainframe computers still ruled, a small Silicon Valley company was
contracted to design an integrated circuit for a business calculator for Busicom. Instead of
hardwired calculations like other calculator chips of the day, this one was designed as a tiny CPU
that could be programmed to perform almost any calculation.
The expensive and time-consuming work of designing a custom wired
chip was replaced by the flexible 4004 microprocessor and the
instructions stored in a separate ROM (Read Only
Memory) chip. A new calculator with entirely new
features can be created simply by programming a new
ROM chip. The company that started this revolution was
Intel Corporation. The concept of a general purpose
CPU chip grew up to be the microprocessor that is the heart of your powerful PC.
4. 4 Bits Isn’t Enough
The original 4004 microprocessor chip handled data in
four bit chunks. Four bits gives you sixteen possible
numbers, enough to handle standard decimal arithmetic
for a calculator. If it were only the size of the numbers
we calculate with, we might still be using four bit
microprocessors.
The problem is that there is another form of calculation a stored
instruction computer needs to do. That is, it has to figure out where in
memory instructions are. In other words, it has to calculate memory
locations to process program branch instructions or to index into tables
of data.
Like I said, four bits only gets you sixteen possibilities and even the 4004 needed to address 640
bytes of memory to handle calculator functions. Modern microprocessor chips like the Intel
Pentium 4 can address 18,446,744,073,709,551,616 bytes of memory, though the motherboard is
limited to less than this total. This led to the push for more bits in our microprocessors. We are now
on the fence between 32 bit microprocessors and 64 bit monsters like the AMD Athlon 64.
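A quick line of arithmetic shows why address bits matter so much; this short Python snippet (mine, not the article's) simply raises 2 to the number of address bits:

# Each extra address bit doubles how many memory locations can be reached.
for bits in (4, 16, 20, 32, 64):
    print(f"{bits:>2}-bit addresses reach {2**bits:,} locations")
# 16 bits -> 65,536 (the 8080's 64 Kilobytes)
# 20 bits -> 1,048,576 (the 1 Megabyte reachable by the 8086/8088)
# 64 bits -> 18,446,744,073,709,551,616 (the figure quoted above)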
5. The First Step Up, 8 Bits
With a total memory address space of 640 bytes, the Intel 4004 chip was
never going to be the starting point for a personal
computer. In 1972, Intel delivered the 8008, a scaled up 4004. The 8008
was the first of many 8-bit microprocessors to fuel the home computer
revolution. It was limited to only 16 Kilobytes of address space, but in
those days no one could afford that much RAM.
Two years later, Intel introduced the 8080 microprocessor with 64 Kilobytes of
memory space and increased the rate of execution by a factor of ten over the 8008.
About this time, Motorola brought out the 6800 with similar performance. The 8080
became the core of serious microcomputers that led to the Intel 8088 used in the IBM
PC, while the 6800 family headed in the direction of the Apple II personal computer.
6. 16 Bits Enables the IBM PC
By the late '70s, the personal computer was bursting at the seams of 8-bit
microprocessor performance. In 1979, Intel delivered the 8088 and IBM engineers
used it for the first PC. The combination of the new 16 bit microprocessor and the name IBM
shifted the personal computer from a techie toy in the garage to a mainstream business tool.
The major advantage of the 8086 was up to 1 Megabyte of memory addressing.
Now, large spreadsheets or large documents could be read in from the disk and
held in RAM memory for fast access and manipulation. These days, it’s not
uncommon to have a thousand times more than that in a single 1 Gigabyte RAM
Module, but back in that time it put the IBM PC in the same league with minicomputers the size of
a refrigerator.
7. Cache RAM, Catching Up With the CPU
We’ll have to continue the march through the
lineup of microprocessors in the next
installment to make way for the first of the
enhancements that you should understand.
With memory space expanding and the speed
of microprocessor cores going ever faster,
there was a problem of the memory keeping
up.
Large low-powered memories cannot go as fast
as smaller higher power RAM chips. To keep
the fastest CPUs running full speed,
microprocessor engineers started inserting a
few of the fast and small memories between the main large RAM and the microprocessor. The
purpose of this smaller memory is to hold instructions that get repeatedly executed or data that is
accessed often.
This smaller memory is called cache RAM and allows the microprocessor
to execute at full speed. Naturally, the larger the cache RAM, the higher the
percentage of cache hits, and the more of the time the microprocessor can keep running at full
speed. When the program execution leads to instructions not in the cache,
then the instructions need to be fetched from the main memory and the
microprocessor has to stop and wait.
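Here is a toy Python sketch of that hit-and-miss behavior (not from the article; the cache size, eviction rule, and access pattern are made up for illustration):

# Repeated addresses are served from the small fast cache (a hit);
# new ones must come from slow main RAM (a miss).
CACHE_SLOTS = 4
cache = {}                                     # address -> value, at most CACHE_SLOTS entries
main_ram = {addr: addr * 2 for addr in range(64)}
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:
        hits += 1                              # fast path: already in cache
        return cache[addr]
    misses += 1                                # slow path: fetch from main RAM
    if len(cache) >= CACHE_SLOTS:
        cache.pop(next(iter(cache)))           # evict the oldest entry
    cache[addr] = main_ram[addr]
    return cache[addr]

for addr in [1, 2, 1, 1, 3, 2, 50, 1]:         # a loop-like pattern keeps reusing 1 and 2
    read(addr)

print(hits, misses)                            # 4 hits, 4 misses for this pattern

The loop-like pattern is the point: real programs revisit the same instructions and data, which is why even a small cache catches a large share of accesses.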
8. Cache Grows Up
The idea of cache RAM has grown along with the size and
complexity of microprocessor chips. A high-end Pentium 4 has 2
Megabytes of cache RAM built into the chip. That’s more than
twice the entire memory address space of the original 8088 chip
used in the first PC and clones. Putting the cache right on the
microprocessor itself removes the slowdown of the wires between
chips. You know you are going fast when the speed of light for a
few inches makes a difference! | System instruction: [This task requires that you answer the following question based solely on the information provided in the prompt and context block. You are not allowed to use any external resources or prior knowledge. ]
Question: [How did the introduction of the 16-bit Intel 8088 microprocessor contribute to the rise of personal computers in mainstream business use?]
Context: [The microprocessor, or CPU, as some people
call it, is the brains of our personal computer. I’m
getting into this history lesson not because I’m a
history buff (though computers do have a
wonderfully interesting past), but to go through
the development step-by-step to explain how
they work.
Well, not everything about how they work, but
enough to understand the importance of the latest features and what they do for you. It’s going to
take more than one article to dig into the inner secrets of microprocessors. I hope it’s an interesting
read for you and helps you recognize computer buzzwords when you’re making your next
computer purchase.
1. Where Did CPUs Come From?
When the 1970s dawned, computers were still monster machines hidden in
air-conditioned rooms and attended to by technicians in white lab coats. One
component of a mainframe computer, as they were known, was the CPU, or
Central Processing Unit. This was a steel cabinet bigger than a refrigerator full
of circuit boards crowded with transistors.
Computers had only recently been converted from vacuum tubes to transistors
and only the very latest machines used primitive integrated circuits where a
few transistors were gathered in one package. That means the CPU was a big
pile of equipment. The thought that the CPU could be reduced to a chip of
silicon the size of your fingernail was the stuff of science fiction.
2. How Does a CPU Work?
In the '40s, mathematicians John Von Neumann, J. Presper Eckert and John Mauchly came up
with the concept of the stored instruction digital computer. Before then, computers were
programmed by rewiring their circuits to perform a certain calculation over and over. By having a
memory and storing a set of instructions that can be performed over and over, as well as logic to
vary the path of instruction, execution programmable computers were possible.
The component of the computer that fetches the instructions and data from the memory and
carries out the instructions in the form of data manipulation and numerical calculations is called the
CPU. It’s central because all the memory and the input/output devices must connect to the CPU,
so it’s only natural to keep the cables short to put the CPU in the middle. It does all the instruction
execution and number calculations so it’s called the Processing Unit.
The CPU has a program counter that points to the next instruction to be executed. It goes through
a cycle where it retrieves, from memory, the instructions in the program counter. It then retrieves
the required data from memory, performs the calculation indicated by the instruction and stores the
result. The program counter is incremented to point to the next instruction and the cycle starts all
over.
3. The First Microprocessor
In 1971 when the heavy iron mainframe computers still ruled, a small Silicon Valley company was
contracted to design an integrated circuit for a business calculator for Busicom. Instead of
hardwired calculations like other calculator chips of the day, this one was designed as a tiny CPU
that could be programmed to perform almost any calculation.
The expensive and time-consuming work of designing a custom wired
chip was replaced by the flexible 4004 microprocessor and the
instructions stored in a separate ROM (Read Only
Memory) chip. A new calculator with entirely new
features can be created simply by programming a new
ROM chip. The company that started this revolution was
Intel Corporation. The concept of a general purpose
CPU chip grew up to be the microprocessor that is the heart of your powerful PC.
4. 4 Bits Isn’t Enough
The original 4004 microprocessor chip handled data in
four bit chunks. Four bits gives you sixteen possible
numbers, enough to handle standard decimal arithmetic
for a calculator. If it were only the size of the numbers
we calculate with, we might still be using four bit
microprocessors.
The problem is that there is another form of calculation a stored
instruction computer needs to do. That is it has to figure out where in
memory instructions are. In other words, it has to calculate memory
locations to process program branch instructions or to index into tables
of data.
Like I said, four bits only gets you sixteen possibilities and even the 4004 needed to address 640
bytes of memory to handle calculator functions. Modern microprocessor chips like the Intel
Pentium 4 can address 18,446,744,073,709,551,616 bytes of memory, though the motherboard is
limited to less than this total. This led to the push for more bits in our microprocessors. We are now
on the fence between 32 bit microprocessors and 64 bit monsters like the AMD Athlon 64.
5. The First Step Up, 8 Bits
With a total memory address space of 640 bytes, the Intel 4004 chip was
not the first microprocessor to be the starting point for a personal
computer. In 1972, Intel delivered the 8008, a scaled up 4004. The 8008
was the first of many 8- bit microprocessors to fuel the home computer
revolution. It was limited to only 16 Kilobytes of address space, but in
those days no one could afford that much RAM.
Two years later, Intel introduced the 8080 microprocessor with 64 Kilobytes of
memory space and increased the rate of execution by a factor of ten over the 8008.
About this time, Motorola brought out the 6800 with similar performance. The 8080
became the core of serious microcomputers that led to the Intel 8088 used in the IBM
PC, while the 6800 family headed in the direction of the Apple II personal computer.
6. 16 Bits Enables the IBM PC
By the late '70s, the personal computer was bursting at the seams of the 8 bit
microprocessor performance. In 1979, Intel delivered the 8088 and IBM engineers
used it for the first PC. The combination of the new 16 bit microprocessor and the name IBM
shifted the personal computer from a techie toy in the garage to a mainstream business tool.
The major advantage of the 8086 was up to 1 Megabyte of memory addressing.
Now, large spreadsheets or large documents could be read in from the disk and
held in RAM memory for fast access and manipulation. These days, it’s not
uncommon to have a thousand times more than that in a single 1 Gigabyte RAM
Module, but back in that time it put the IBM PC in the same league with minicomputers the size of
a refrigerator.
7. Cache RAM, Catching Up With the CPU
We’ll have to continue the march through the
lineup of microprocessors in the next
installment to make way for the first of the
enhancements that you should understand.
With memory space expanding and the speed
of microprocessor cores going ever faster,
there was a problem of the memory keeping
up.
Large low-powered memories cannot go as fast
as smaller higher power RAM chips. To keep
the fastest CPUs running full speed,
microprocessor engineers started inserting a
few of the fast and small memories between the main large RAM and the microprocessor. The
purpose of this smaller memory is to hold instructions that get repeatedly executed or data that is
accessed often.
This smaller memory is called cache RAM and allows the microprocessor
to execute at full speed. Naturally, the larger the cache RAM the higher
percentage of cache hits and the microprocessor can continue running full
speed. When the program execution leads to instructions not in the cache,
then the instructions need to be fetched from the main memory and the
microprocessor has to stop and wait.
8. Cache Grows Up
The idea of cache RAM has grown along with the size and
complexity of microprocessor chips. A high-end Pentium 4has 2
Megabytes of cache RAM built into the chip. That’s more than
twice the entire memory address space of the original 8088 chip
used in the first PC and clones. Putting the cache right on the
microprocessor itself removes the slowdown of the wires between
chips. You know you are going fast when the speed of light for a
few inches makes a difference!] |
You can only respond to the prompt using the information in the context block and no other sources. | Write a summary of all of the benefits and concerns of artificial intelligence development and use. | In recent years, the Administration and Congress have been increasingly engaged in supporting
artificial intelligence R&D and working to address policy concerns arising from AI development
and use. Congressional activities focused on AI increased substantially in the 116th and 117th
Congresses, including multiple committee hearings in the House and Senate, the introduction of
numerous AI-focused bills, and the passage of AI provisions in legislation. Enacted legislation
has included the National AI Initiative Act of 2020 within the William M. (Mac) Thornberry
National Defense Authorization Act for Fiscal Year 2021 (P.L. 116-283); the AI in Government
Act of 2020 within the Consolidated Appropriations Act, 2021 (P.L. 116-260); and provisions
focused on AI activities at NSF, DOE, and NIST within P.L. 117-167, the CHIPS and Science
Act.
AI holds potential benefits and opportunities, such as through augmenting human decisionmaking
and optimizing performance for complex tasks. It also presents challenges and pitfalls, such as
through perpetuating or amplifying bias and failing in unexpected ways. The ready availability in
2022 of software (i.e., ChatGPT) that can intelligently (1) respond to questions, and (2) draft
prose documents may represent a sentinel event in popular use of AI.
There are several broad concerns related to AI, spanning multiple sectors, that could be
considered in the 118th Congress. These include
• the impact of AI and AI-driven automation on the workforce, including potential
job losses and the need for worker retraining;
• the challenges of educating students in AI, from teaching foundational concepts
at the K-12 level to supporting doctoral-level training to meet increasing demand
for AI expertise;
• the balance of federal and private sector funding for AI;
• whether and how to increase access to public datasets to train AI systems for use
in the public and private sectors;
• the development of standards and testing protocols and algorithmic auditing
capabilities for AI systems;
• the need for and effectiveness of federal and international coordination efforts in
AI, as well as concerns over international competition in AI R&D and
deployment; and
• the incorporation of ethics, privacy, security, transparency, and accountability
considerations in AI systems, including such applications as facial recognition
technologies.
There are additional national security concerns about the potential use of AI technologies that
Congress could address, such as the potential for “deep fakes” to influence elections and erode
public trust, the balance of human and automated decisionmaking in military operations, and
concerns about the dissemination of U.S.-developed AI technologies and federally funded AI
research results to potential competitors or adversaries. | You can only respond to the prompt using the information in the context block and no other sources. Write a summary of all of the benefits and concerns of artificial intelligence development and use.
In recent years, the Administration and Congress have been increasingly engaged in supporting artificial intelligence R&D and working to address policy concerns arising from AI development and use. Congressional activities focused on AI increased substantially in the 116th and 117th Congresses, including multiple committee hearings in the House and Senate, the introduction of numerous AI-focused bills, and the passage of AI provisions in legislation. Enacted legislation has included the National AI Initiative Act of 2020 within the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021 (P.L. 116-283); the AI in Government Act of 2020 within the Consolidated Appropriations Act, 2021 (P.L. 116-260); and provisions focused on AI activities at NSF, DOE, and NIST within P.L. 117-167, the CHIPS and Science Act.
AI holds potential benefits and opportunities, such as through augmenting human decisionmaking and optimizing performance for complex tasks. It also presents challenges and pitfalls, such as through perpetuating or amplifying bias and failing in unexpected ways. The ready availability in 2022 of software (i.e., ChatGPT) that can intelligently (1) respond to questions, and (2) draft prose documents may represent a sentinel event in popular use of AI.
There are several broad concerns related to AI, spanning multiple sectors, that could be
considered in the 118th Congress. These include Congressional Research Service
45 Science and Technology Issues for the 118th Congress
• the impact of AI and AI-driven automation on the workforce, including potential
job losses and the need for worker retraining;
• the challenges of educating students in AI, from teaching foundational concepts
at the K-12 level to supporting doctoral-level training to meet increasing demand
for AI expertise;
• the balance of federal and private sector funding for AI;
• whether and how to increase access to public datasets to train AI systems for use
in the public and private sectors;
• the development of standards and testing protocols and algorithmic auditing
capabilities for AI systems;
• the need for and effectiveness of federal and international coordination efforts in
AI, as well as concerns over international competition in AI R&D and
deployment; and
• the incorporation of ethics, privacy, security, transparency, and accountability
considerations in AI systems, including such applications as facial recognition
technologies. There are additional national security concerns about the potential use of AI technologies that Congress could address, such as the potential for “deep fakes” to influence elections and erode public trust, the balance of human and automated decisionmaking in military operations, and concerns about the dissemination of U.S.-developed AI technologies and federally funded AI research results to potential competitors or adversaries. |
You must draw your answer from the below text only. You must not use any outside resources or prior knowledge. Limit your answer to 100 words or fewer. | What is the deeming rule? | Circuit Split over the Food and Drug
Administration’s Denial of Applications
Seeking to Market Flavored E-Cigarettes, Part
1 of 2
April 5, 2024
Electronic nicotine delivery system (ENDS) products—products that go by many common names, such as
e-cigarettes and vape pens—are generally required to receive prior authorization from the Food and Drug
Administration (FDA) before they can be lawfully marketed in the United States. Before FDA issued
regulations in 2016 to subject these products to the premarket review process, however, many of them
were already being sold on the U.S. market and were allowed to remain there while FDA implemented the
application and review process. These products come in a variety of forms and flavors, from tobacco and
menthol flavors based on the flavors of traditional combustible cigarettes to other flavors based on the
flavors of fruit, candy, and other sweets (“flavored ENDS products”). While limited studies of certain
ENDS products show that they contain substantially lower levels of toxins than combustible cigarettes,
indicating a benefit to current adult smokers who switch completely to using ENDS products, flavored
ENDS products have been shown to be particularly attractive to youth. In a 2016-2017 study, for instance,
93.2% of youth ENDS product users reported that their first use was with a flavored product. In 2018, the
Surgeon General issued an advisory on the “e-cigarette epidemic among youth.”
Since the initial deadline in September 2020 for ENDS product manufacturers to submit their premarket
tobacco product applications (PMTAs), FDA has received millions of applications for ENDS products. To
date, the agency has authorized 23 tobacco-flavored ENDS products for lawful marketing and has not
authorized any flavored ENDS products. Many applicants that have received a marketing denial order
(MDO) for their flavored ENDS products have filed petitions in U.S. Courts of Appeals throughout the
country to challenge the denial of their PMTAs. Of the courts that have considered these petitions, the
Second, Third, Fourth, Sixth, Seventh, Ninth, Tenth, and D.C. Circuits have sided with FDA and denied
the petitions or requests to stay the agency’s MDOs. The Eleventh and Fifth Circuits, on the other hand,
have sided with the ENDS manufacturers and vacated FDA’s MDOs, remanding the applications to FDA
for reconsideration. This circuit split sets the stage for potential Supreme Court review regarding what
information FDA may require applicants seeking to market flavored ENDS products to provide as part of
their PMTAs. This two-part Sidebar examines the circuit split. Part I provides an overview of the Family
Smoking Prevention and Tobacco Control Act (TCA) regulatory framework, relevant FDA actions related
to ENDS products, and the agency’s review and denial of the PMTAs involving flavored ENDS products.
Part II provides an overview of the litigation challenging those FDA orders, the court decisions to date,
and certain preliminary observations for consideration by Congress.
Background on TCA’s Statutory Framework
In 2009, Congress enacted the TCA, which established the central federal regulatory regime for the
manufacture, marketing, and distribution of tobacco products. Among other things, the TCA required all
new tobacco products—that is, those not commercially marketed in the United States prior to February
15, 2007—to receive prior authorization from FDA before they can be marketed to the public. In
establishing this regulatory regime, the TCA aims to balance competing interests in protecting the public’s
health against the harmful effects of smoking and youth tobacco use, while preserving access to lawfully
marketed tobacco products for adult consumers. To further this goal, the TCA grants FDA “primary
Federal regulatory authority” over tobacco products and establishes a premarket review process for new
tobacco products. Such products generally may not be marketed until the manufacturer submits a PMTA
and receives a marketing granted order (MGO) from the Center for Tobacco Products, established within
FDA to implement the TCA.
The TCA permits FDA to issue an MGO only upon certain findings, including a conclusion that
“permitting such tobacco product to be marketed would be appropriate for the protection of the public
health,” or APPH. This APPH determination must be made “with respect to the risks and benefits to the
population as a whole, including users and nonusers of the tobacco product,” taking into account the
likelihood that existing users of tobacco products will stop using such products and the likelihood that
those who do not use such products will start using them. The TCA directs FDA, in making this
evaluation, to consult a range of evidence, including “information submitted to the Secretary as part of the
[PMTA] and any other information before the Secretary with respect to such tobacco product.” Such
information may include “when appropriate . . . well-controlled investigations, which may include 1 or
more clinical investigations by experts qualified by training and experience to evaluate the tobacco
product,” as well as other “valid scientific evidence” determined by the Secretary to be sufficient to
evaluate the tobacco product.
While the TCA explicitly applies to cigarettes, cigarette tobacco, roll-your-own tobacco, and smokeless
tobacco, the statute also authorizes FDA to deem other tobacco products subject to the law. In 2016, FDA
invoked this authority and promulgated what is known as the Deeming Rule, which subjected ENDS
products to the TCA’s regulatory regime.
| You must draw your answer from the below text only. You must not use any outside resources or prior knowledge. Limit your answer to 100 words or fewer.
Circuit Split over the Food and Drug
Administration’s Denial of Applications
Seeking to Market Flavored E-Cigarettes, Part
1 of 2
April 5, 2024
Electronic nicotine delivery system (ENDS) products—products that go by many common names, such as
e-cigarettes and vape pens—are generally required to receive prior authorization from the Food and Drug
Administration (FDA) before they can be lawfully marketed in the United States. Before FDA issued
regulations in 2016 to subject these products to the premarket review process, however, many of them
were already being sold on the U.S. market and were allowed to remain there while FDA implemented the
application and review process. These products come in a variety of forms and flavors, from tobacco and
menthol flavors based on the flavors of traditional combustible cigarettes to other flavors based on the
flavors of fruit, candy, and other sweets (“flavored ENDS products”). While limited studies of certain
ENDS products show that they contain substantially lower levels of toxins than combustible cigarettes,
indicating a benefit to current adult smokers who switch completely to using ENDS products, flavored
ENDS products have been shown to be particularly attractive to youth. In a 2016-2017 study, for instance,
93.2% of youth ENDS product users reported that their first use was with a flavored product. In 2018, the
Surgeon General issued an advisory on the “e-cigarette epidemic among youth.”
Since the initial deadline in September 2020 for ENDS product manufacturers to submit their premarket
tobacco product applications (PMTAs), FDA has received millions of applications for ENDS products. To
date, the agency has authorized 23 tobacco-flavored ENDS products for lawful marketing and has not
authorized any flavored ENDS products. Many applicants that have received a marketing denial order
(MDO) for their flavored ENDS products have filed petitions in U.S. Courts of Appeals throughout the
country to challenge the denial of their PMTAs. Of the courts that have considered these petitions, the
Second, Third, Fourth, Sixth, Seventh, Ninth, Tenth, and D.C. Circuits have sided with FDA and denied
the petitions or requests to stay the agency’s MDOs. The Eleventh and Fifth Circuits, on the other hand,
have sided with the ENDS manufacturers and vacated FDA’s MDOs, remanding the applications to FDA
for reconsideration. This circuit split sets the stage for potential Supreme Court review regarding what
information FDA may require applicants seeking to market flavored ENDS products to provide as part of
Congressional Research Service
https://crsreports.congress.gov
LSB11141
Congressional Research Service 2
their PMTAs. This two-part Sidebar examines the circuit split. Part I provides an overview of the Family
Smoking Prevention and Tobacco Control Act (TCA) regulatory framework, relevant FDA actions related
to ENDS products, and the agency’s review and denial of the PMTAs involving flavored ENDS products.
Part II provides an overview of the litigation challenging those FDA orders, the court decisions to date,
and certain preliminary observations for consideration by Congress.
Background on TCA’s Statutory Framework
In 2009, Congress enacted the TCA, which established the central federal regulatory regime for the
manufacture, marketing, and distribution of tobacco products. Among other things, the TCA required all
new tobacco products—that is, those not commercially marketed in the United States prior to February
15, 2007—to receive prior authorization from FDA before they can be marketed to the public. In
establishing this regulatory regime, the TCA aims to balance competing interests in protecting the public’s
health against the harmful effects of smoking and youth tobacco use, while preserving access to lawfully
marketed tobacco products for adult consumers. To further this goal, the TCA grants FDA “primary
Federal regulatory authority” over tobacco products and establishes a premarket review process for new
tobacco products. Such products generally may not be marketed until the manufacturer submits a PMTA
and receives a marketing granted order (MGO) from the Center for Tobacco Products, established within
FDA to implement the TCA.
The TCA permits FDA to issue an MGO only upon certain findings, including a conclusion that
“permitting such tobacco product to be marketed would be appropriate for the protection of the public
health,” or APPH. This APPH determination must be made “with respect to the risks and benefits to the
population as a whole, including users and nonusers of the tobacco product,” taking into account the
likelihood that existing users of tobacco products will stop using such products and the likelihood that
those who do not use such products will start using them. The TCA directs FDA, in making this
evaluation, to consult a range of evidence, including “information submitted to the Secretary as part of the
[PMTA] and any other information before the Secretary with respect to such tobacco product.” Such
information may include “when appropriate . . . well-controlled investigations, which may include 1 or
more clinical investigations by experts qualified by training and experience to evaluate the tobacco
product,” as well as other “valid scientific evidence” determined by the Secretary to be sufficient to
evaluate the tobacco product.
While the TCA explicitly applies to cigarettes, cigarette tobacco, roll-your-own tobacco, and smokeless
tobacco, the statute also authorizes FDA to deem other tobacco products subject to the law. In 2016, FDA
invoked this authority and promulgated what is known as the Deeming Rule, which subjected ENDS
products to the TCA’s regulatory regime.
QUESTION
What is the deeming rule? |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | I am currently learning about IoT: its implementations, use cases, and so on, and how it impacts our lives and industries. What industries and technologies implement it, and what benefits do we receive from them? What are some examples where this technology is used? Explain in less than 500 words.
What is the Internet of Things (IoT)?
The term IoT, or Internet of Things, refers to the collective network of connected devices and the technology that facilitates communication between devices and the cloud, as well as between the devices themselves. Thanks to the advent of inexpensive computer chips and high bandwidth telecommunication, we now have billions of devices connected to the internet. This means everyday devices like toothbrushes, vacuums, cars, and machines can use sensors to collect data and respond intelligently to users.
The Internet of Things integrates everyday “things” with the internet. Computer Engineers have been adding sensors and processors to everyday objects since the 90s. However, progress was initially slow because the chips were big and bulky. Low power computer chips called RFID tags were first used to track expensive equipment. As computing devices shrank in size, these chips also became smaller, faster, and smarter over time.
The cost of integrating computing power into small objects has now dropped considerably. For example, you can add connectivity with Alexa voice services capabilities to MCUs with less than 1MB embedded RAM, such as for light switches. A whole industry has sprung up with a focus on filling our homes, businesses, and offices with IoT devices. These smart objects can automatically transmit data to and from the Internet. All these “invisible computing devices” and the technology associated with them are collectively referred to as the Internet of Things.
How does IoT work?
A typical IoT system works through the real-time collection and exchange of data. An IoT system has three components:
Smart devices
This is a device, like a television, security camera, or exercise equipment that has been given computing capabilities. It collects data from its environment, user inputs, or usage patterns and communicates data over the internet to and from its IoT application.
IoT application
An IoT application is a collection of services and software that integrates data received from various IoT devices. It uses machine learning or artificial intelligence (AI) technology to analyze this data and make informed decisions. These decisions are communicated back to the IoT device and the IoT device then responds intelligently to inputs.
A graphical user interface
The IoT device or fleet of devices can be managed through a graphical user interface. Common examples include a mobile application or website that can be used to register and control smart devices.
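As a rough illustration of how those three parts fit together, here is a small Python sketch (not part of the original page; the device name, threshold, and message format are invented assumptions):

# A toy walk-through: a "smart device" reports a reading as JSON,
# an "IoT application" analyzes it and sends a decision back.
import json

def device_report(sensor_c):
    # Smart device: collect a reading and package it for the network
    return json.dumps({"device": "thermostat-01", "temperature_c": sensor_c})

def application_decide(message):
    # IoT application: analyze incoming data and choose a response
    data = json.loads(message)
    action = "cooling_on" if data["temperature_c"] > 26 else "cooling_off"
    return json.dumps({"device": data["device"], "action": action})

reply = application_decide(device_report(29.5))
print(reply)   # {"device": "thermostat-01", "action": "cooling_on"}

A graphical user interface would sit on top of this exchange, letting a person register the device and see or override the decisions.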
What are examples of IoT devices?
Let’s look at some examples of IoT systems in use today:
Connected cars
There are many ways vehicles, such as cars, can be connected to the internet. It can be through smart dashcams, infotainment systems, or even the vehicle's connected gateway. They collect data from the accelerator, brakes, speedometer, odometer, wheels, and fuel tanks to monitor both driver performance and vehicle health. Connected cars have a range of uses:
Monitoring rental car fleets to increase fuel efficiency and reduce costs.
Helping parents track the driving behavior of their children.
Notifying friends and family automatically in case of a car crash.
Predicting and preventing vehicle maintenance needs.
Connected homes
Smart home devices are mainly focused on improving the efficiency and safety of the house, as well as improving home networking. Devices like smart outlets monitor electricity usage and smart thermostats provide better temperature control. Hydroponic systems can use IoT sensors to manage the garden while IoT smoke detectors can detect tobacco smoke. Home security systems like door locks, security cameras, and water leak detectors can detect and prevent threats, and send alerts to homeowners.
Connected devices for the home can be used for:
Automatically turning off devices not being used.
Rental property management and maintenance.
Finding misplaced items like keys or wallets.
Automating daily tasks like vacuuming, making coffee, etc.
Smart cities
IoT applications have made urban planning and infrastructure maintenance more efficient. Governments are using IoT applications to tackle problems in infrastructure, health, and the environment. IoT applications can be used for:
Measuring air quality and radiation levels.
Reducing energy bills with smart lighting systems.
Detecting maintenance needs for critical infrastructures such as streets, bridges, and pipelines.
Increasing profits through efficient parking management.
Smart buildings
Buildings such as college campuses and commercial buildings use IoT applications to drive greater operational efficiencies. IoT devices can be used in smart buildings for:
Reducing energy consumption.
Lowering maintenance costs.
Utilizing work spaces more efficiently.
What is Industrial IoT?
Industrial IoT (IIoT) refers to smart devices used in manufacturing, retail, health, and other enterprises to create business efficiencies. Industrial devices, from sensors to equipment, give business owners detailed, real-time data that can be used to improve business processes. They provide insights on supply chain management, logistics, human resources, and production – decreasing costs and increasing revenue streams.
Let’s look at existing smart industrial systems in different verticals:
Manufacturing
Enterprise IoT in manufacturing uses predictive maintenance to reduce unplanned downtime and wearable technology to improve worker safety. IoT applications can predict machine failure before it happens, reducing production downtime. Wearables in helmets and wristbands, as well as computer vision cameras, are used to warn workers about potential hazards.
Automobile
Sensor-driven analytics and robotics increase efficiency in automobile manufacturing and maintenance. For example, industrial sensors are used to provide 3D real-time images of internal vehicle components. Diagnostics and troubleshooting can be done much faster while the IoT system orders replacement parts automatically.
Logistics and transport
Commercial and Industrial IoT devices can help with supply chain management, including inventory management, vendor relationships, fleet management, and scheduled maintenance. Shipping companies use Industrial IoT applications to keep track of assets and optimize fuel consumption on shipping routes. The technology is especially useful for tight temperature control in refrigerated containers. Supply chain managers make informed predictions through smart routing and rerouting algorithms.
Retail
Amazon is driving innovation in automation and human-machine collaboration in retail. Amazon facilities make use of internet-connected robots for tracking, locating, sorting, and moving products.
How can IoT improve our lives?
The Internet of Things has a wide-ranging impact on human life and work. It allows machines to do more heavy lifting, take over tedious tasks and make life more healthy, productive, and comfortable.
For example, connected devices could change your entire morning routine. When you hit the snooze button, your alarm clock would automatically get the coffee machine to turn on and open your window blinds. Your refrigerator would auto-detect finishing groceries and order them for home delivery. Your smart oven would tell you the menu for the day — it might even cook pre-assembled ingredients and make sure your lunch is ready. Your smartwatch will schedule meetings as your connected car automatically sets the GPS to stop for a fuel refill. The opportunities are endless in an IoT world!
What are the benefits of IoT for business?
Accelerate innovation
The Internet of Things gives businesses access to advanced analytics that uncover new opportunities. For example, businesses can create highly targeted advertising campaigns by collecting data on customer behavior.
Turn data into insights and actions with AI and ML
Collected data and historical trends can be used to predict future outcomes. For example, warranty information can be paired with IoT-collected data to predict maintenance incidents. This can be used to proactively provide customer service and build customer loyalty.
Increase security
Continuous monitoring of digital and physical infrastructure can optimize performance, improve efficiency and reduce safety risks. For example, data collected from an onsite monitor can be combined with hardware and firmware version data to automatically schedule system updates.
Scale differentiated solutions
IoT technologies can be deployed in a customer focused way to increase satisfaction. For example, trending products can be restocked promptly to avoid shortages. | [question]
Currently learning about IoT, its implementations, use cases, and so on, and how it impacts our lives and industries. What industries and technologies implement this, and what benefits do we receive from them? What are some examples where this technology is used? Explain in less than 500 words.
=====================
[text]
What is the Internet of Things (IoT)?
The term IoT, or Internet of Things, refers to the collective network of connected devices and the technology that facilitates communication between devices and the cloud, as well as between the devices themselves. Thanks to the advent of inexpensive computer chips and high bandwidth telecommunication, we now have billions of devices connected to the internet. This means everyday devices like toothbrushes, vacuums, cars, and machines can use sensors to collect data and respond intelligently to users.
The Internet of Things integrates everyday “things” with the internet. Computer Engineers have been adding sensors and processors to everyday objects since the 90s. However, progress was initially slow because the chips were big and bulky. Low power computer chips called RFID tags were first used to track expensive equipment. As computing devices shrank in size, these chips also became smaller, faster, and smarter over time.
The cost of integrating computing power into small objects has now dropped considerably. For example, you can add connectivity with Alexa voice services capabilities to MCUs with less than 1MB embedded RAM, such as for light switches. A whole industry has sprung up with a focus on filling our homes, businesses, and offices with IoT devices. These smart objects can automatically transmit data to and from the Internet. All these “invisible computing devices” and the technology associated with them are collectively referred to as the Internet of Things.
How does IoT work?
A typical IoT system works through the real-time collection and exchange of data. An IoT system has three components:
Smart devices
This is a device, like a television, security camera, or exercise equipment that has been given computing capabilities. It collects data from its environment, user inputs, or usage patterns and communicates data over the internet to and from its IoT application.
IoT application
An IoT application is a collection of services and software that integrates data received from various IoT devices. It uses machine learning or artificial intelligence (AI) technology to analyze this data and make informed decisions. These decisions are communicated back to the IoT device and the IoT device then responds intelligently to inputs.
A graphical user interface
The IoT device or fleet of devices can be managed through a graphical user interface. Common examples include a mobile application or website that can be used to register and control smart devices.
What are examples of IoT devices?
Let’s look at some examples of IoT systems in use today:
Connected cars
There are many ways vehicles, such as cars, can be connected to the internet. It can be through smart dashcams, infotainment systems, or even the vehicle's connected gateway. They collect data from the accelerator, brakes, speedometer, odometer, wheels, and fuel tanks to monitor both driver performance and vehicle health. Connected cars have a range of uses:
Monitoring rental car fleets to increase fuel efficiency and reduce costs.
Helping parents track the driving behavior of their children.
Notifying friends and family automatically in case of a car crash.
Predicting and preventing vehicle maintenance needs.
Connected homes
Smart home devices are mainly focused on improving the efficiency and safety of the house, as well as improving home networking. Devices like smart outlets monitor electricity usage and smart thermostats provide better temperature control. Hydroponic systems can use IoT sensors to manage the garden while IoT smoke detectors can detect tobacco smoke. Home security systems like door locks, security cameras, and water leak detectors can detect and prevent threats, and send alerts to homeowners.
Connected devices for the home can be used for:
Automatically turning off devices not being used.
Rental property management and maintenance.
Finding misplaced items like keys or wallets.
Automating daily tasks like vacuuming, making coffee, etc.
Smart cities
IoT applications have made urban planning and infrastructure maintenance more efficient. Governments are using IoT applications to tackle problems in infrastructure, health, and the environment. IoT applications can be used for:
Measuring air quality and radiation levels.
Reducing energy bills with smart lighting systems.
Detecting maintenance needs for critical infrastructures such as streets, bridges, and pipelines.
Increasing profits through efficient parking management.
Smart buildings
Buildings such as college campuses and commercial buildings use IoT applications to drive greater operational efficiencies. IoT devices can be used in smart buildings for:
Reducing energy consumption.
Lowering maintenance costs.
Utilizing work spaces more efficiently.
What is Industrial IoT?
Industrial IoT (IIoT) refers to smart devices used in manufacturing, retail, health, and other enterprises to create business efficiencies. Industrial devices, from sensors to equipment, give business owners detailed, real-time data that can be used to improve business processes. They provide insights on supply chain management, logistics, human resources, and production – decreasing costs and increasing revenue streams.
Let’s look at existing smart industrial systems in different verticals:
Manufacturing
Enterprise IoT in manufacturing uses predictive maintenance to reduce unplanned downtime and wearable technology to improve worker safety. IoT applications can predict machine failure before it happens, reducing production downtime. Wearables in helmets and wristbands, as well as computer vision cameras, are used to warn workers about potential hazards.
Automobile
Sensor-driven analytics and robotics increase efficiency in automobile manufacturing and maintenance. For example, industrial sensors are used to provide 3D real-time images of internal vehicle components. Diagnostics and troubleshooting can be done much faster while the IoT system orders replacement parts automatically.
Logistics and transport
Commercial and Industrial IoT devices can help with supply chain management, including inventory management, vendor relationships, fleet management, and scheduled maintenance. Shipping companies use Industrial IoT applications to keep track of assets and optimize fuel consumption on shipping routes. The technology is especially useful for tight temperature control in refrigerated containers. Supply chain managers make informed predictions through smart routing and rerouting algorithms.
Retail
Amazon is driving innovation in automation and human-machine collaboration in retail. Amazon facilities make use of internet-connected robots for tracking, locating, sorting, and moving products.
How can IoT improve our lives?
The Internet of Things has a wide-ranging impact on human life and work. It allows machines to do more heavy lifting, take over tedious tasks and make life more healthy, productive, and comfortable.
For example, connected devices could change your entire morning routine. When you hit the snooze button, your alarm clock would automatically get the coffee machine to turn on and open your window blinds. Your refrigerator would auto-detect finishing groceries and order them for home delivery. Your smart oven would tell you the menu for the day — it might even cook pre-assembled ingredients and make sure your lunch is ready. Your smartwatch will schedule meetings as your connected car automatically sets the GPS to stop for a fuel refill. The opportunities are endless in an IoT world!
What are the benefits of IoT for business?
Accelerate innovation
The Internet of Things gives businesses access to advanced analytics that uncover new opportunities. For example, businesses can create highly targeted advertising campaigns by collecting data on customer behavior.
Turn data into insights and actions with AI and ML
Collected data and historical trends can be used to predict future outcomes. For example, warranty information can be paired with IoT-collected data to predict maintenance incidents. This can be used to proactively provide customer service and build customer loyalty.
Increase security
Continuous monitoring of digital and physical infrastructure can optimize performance, improve efficiency and reduce safety risks. For example, data collected from an onsite monitor can be combined with hardware and firmware version data to automatically schedule system updates.
Scale differentiated solutions
IoT technologies can be deployed in a customer focused way to increase satisfaction. For example, trending products can be restocked promptly to avoid shortages.
https://aws.amazon.com/what-is/iot/#:~:text=The%20term%20IoT%2C%20or%20Internet,as%20between%20the%20devices%20themselves.
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Rely solely on the context provided to you to answer any questions. Never use any external resources or prior knowledge to answer questions. Limit your responses to five paragraphs or less. | Discuss the concept of student loan repayment and forgiveness programs and their relationship to employment choices. | In assessing the influence of a loan forgiveness or loan repayment program on an individual’s
employment choice, one issue to consider is whether, in the absence of such a program, the
recipient would have engaged in the qualifying service. Information on the influence of such
programs might be gleaned from an examination that compares the career paths of individuals
who have access to loan forgiveness or loan repayment benefits with the career paths of otherwise
similarly situated individuals without such access. These types of evaluations generally have not
been conducted for federal loan forgiveness and loan repayment programs. However, some data
from one federal program may be instructive.
The National Institutes of Health (NIH) examined the career trajectories of loan repayment
recipients in its Intramural Research Program (IRP) and compared them with similar individuals
who did not receive loan repayment under the IRP. The purposes of the IRP’s loan repayment
component are to encourage individuals to complete medical research at the NIH and to encourage
qualified health professionals to continue careers in medical research in general (e.g., at a university). The NIH found that individuals receiving loan repayment benefits were more likely
to continue conducting medical research at the NIH than those who did not. Likewise, individuals
who received loan repayment benefits but then left the NIH were more likely to continue a career
as a medical researcher than those who did not.56 This study suggests that the program may be
meeting its stated goals.
While the NIH study indicates that its loan repayment program may be meeting its stated goals,
the loan repayment program is unlikely to be the sole reason for at least some of the individuals to
remain in the NIH’s targeted positions. Other research has found that some individuals would
have entered certain fields or taken certain positions in the absence of loan repayments for a
variety of other reasons. If this were true, then the program would not have been necessary and,
therefore, might be considered ineffective. For example, a loan repayment program may be an
effective incentive when jobs are plentiful for recent graduates who are weighing multiple
employment opportunities but may be unnecessary when there are fewer employment
opportunities. In relatively recent years, for instance, law school graduates have had fewer
employment opportunities57 and may take a public interest or government job because of more
limited private sector opportunities. Finally, individuals who accept loan repayment for a specific
job might have taken the same job without loan repayment benefits. For example, one study
found that healthcare providers who practice in rural areas would have done so without receiving
a loan repayment award.58
Although in some cases loan forgiveness or loan repayment programs may appear to be
unnecessary, in some instances there is evidence showing that participants would likely not have
taken a particular position but for loan repayment. For example, the NIH examined its IRP loan
repayment program and found that most loan repayment award recipients had competing job
offers and stated that the potential for loan repayment was an attractive benefit that was unique to
the NIH employment. This was particularly true for physicians who often had competing job
offers at higher salaries. Physicians who received loan repayment benefits were also more likely
to remain in research at the NIH, which demonstrates that loan repayment may be an important
recruitment and retention tool.59
Other federal agencies have found that loan repayment programs are effective at recruiting and
maintaining staff, but there are indications that some aspects of a program’s design may
undermine its effectiveness.60 For example, discretionary programs may have their funding
reduced or cut altogether, thus making the availability of loan repayment benefits to individuals
uncertain. The effectiveness of these programs as a recruitment incentive may be hard to
determine because job applicants do not know whether they will receive a loan repayment award
until after having accepted a job.61 Additionally, loan repayment award amounts may not be a sufficient incentive for individuals to
enter into and remain in certain professions. Some researchers have theorized that loan repayment
programs may be more likely to be successful in meeting recruitment and retention needs if the
financial benefits are sufficiently meaningful to offset a reasonable share of the costs associated
with borrowing to pursue a postsecondary education.62
Similarly, in some circumstances, while the dollar amount of loan repayment benefits may be
perceived as sufficient, additional program design elements such as an individual’s responsibility
to pay federal income taxes associated with receiving a loan payment may make the benefit less
attractive for an individual. Specifically, under the Government Employee Student Loan
Repayment Program (GESLRP), participants are responsible for the tax liability, which some
agencies estimate can account for 39% of the loan repayment amount.63 Some agencies suggest
that this makes the program less attractive to participants than it would be if benefits were
excluded from taxation.64
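As a purely illustrative calculation (the dollar figure is hypothetical and does not appear in the report): if an agency repaid $10,000 of an employee's loans in a year and the associated tax liability matched the 39% estimate cited above, the participant would owe roughly $3,900 in taxes, leaving a net benefit of about $6,100.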
Another consideration is the short-term nature of many of these programs (e.g., providing loan
repayment benefits in exchange for a two-year employment commitment), which may contribute
to turnover, as individuals may decide to change jobs once they have realized the full benefit of a
program. This could possibly lead to a less stable workforce for employers. For example, some
researchers have found that individuals who have a service obligation have shorter tenures in a
particular position than do individuals who do not have service obligations.65
| In assessing the influence of a loan forgiveness or loan repayment program on an individual’s
employment choice, one issue to consider is whether, in the absence of such a program, the
recipient would have engaged in the qualifying service. Information on the influence of such
programs might be gleaned from an examination that compares the career paths of individuals
who have access to loan forgiveness or loan repayment benefits with the career paths of otherwise
similarly situated individuals without such access. These types of evaluations generally have not
been conducted for federal loan forgiveness and loan repayment programs. However, some data
from one federal program may be instructive.
The National Institutes of Health (NIH) examined the career trajectories of loan repayment
recipients in its Intramural Research Program (IRP) and compared them with similar individuals
who did not receive loan repayment under the IRP. The purposes of the IRP’s loan repayment
component are to encourage individuals to complete medical research at the NIH and to encourage
qualified health professionals to continue careers in medical research in general (e.g., at a university). The NIH found that individuals receiving loan repayment benefits were more likely
to continue conducting medical research at the NIH than those who did not. Likewise, individuals
who received loan repayment benefits but then left the NIH were more likely to continue a career
as a medical researcher than those who did not.56 This study suggests that the program may be
meeting its stated goals.
While the NIH study indicates that its loan repayment program may be meeting its stated goals,
the loan repayment program is unlikely to be the sole reason for at least some of the individuals to
remain in the NIH’s targeted positions. Other research has found that some individuals would
have entered certain fields or taken certain positions in the absence of loan repayments for a
variety of other reasons. If this were true, then the program would not have been necessary and,
therefore, might be considered ineffective. For example, a loan repayment program may be an
effective incentive when jobs are plentiful for recent graduates who are weighing multiple
employment opportunities but may be unnecessary when there are fewer employment
opportunities. In relatively recent years, for instance, law school graduates have had fewer
employment opportunities57 and may take a public interest or government job because of more
limited private sector opportunities. Finally, individuals who accept loan repayment for a specific
job might have taken the same job without loan repayment benefits. For example, one study
found that healthcare providers who practice in rural areas would have done so without receiving
a loan repayment award.58
Although in some cases loan forgiveness or loan repayment programs may appear to be
unnecessary, in some instances there is evidence showing that participants would likely not have
taken a particular position but for loan repayment. For example, the NIH examined its IRP loan
repayment program and found that most loan repayment award recipients had competing job
offers and stated that the potential for loan repayment was an attractive benefit that was unique to
the NIH employment. This was particularly true for physicians who often had competing job
offers at higher salaries. Physicians who received loan repayment benefits were also more likely
to remain in research at the NIH, which demonstrates that loan repayment may be an important
recruitment and retention tool.59
Other federal agencies have found that loan repayment programs are effective at recruiting and
maintaining staff, but there are indications that some aspects of a program’s design may
undermine its effectiveness.60 For example, discretionary programs may have their funding
reduced or cut altogether, thus making the availability of loan repayment benefits to individuals
uncertain. The effectiveness of these programs as a recruitment incentive may be hard to
determine because job applicants do not know whether they will receive a loan repayment award
until after having accepted a job.61 Additionally, loan repayment award amounts may not be a sufficient incentive for individuals to
enter into and remain in certain professions. Some researchers have theorized that loan repayment
programs may be more likely to be successful in meeting recruitment and retention needs if the
financial benefits are sufficiently meaningful to offset a reasonable share of the costs associated
with borrowing to pursue a postsecondary education.62
Similarly, in some circumstances, while the dollar amount of loan repayment benefits may be
perceived as sufficient, additional program design elements such as an individual’s responsibility
to pay federal income taxes associated with receiving a loan payment may make the benefit less
attractive for an individual. Specifically, under the Government Employee Student Loan
Repayment Program (GESLRP), participants are responsible for the tax liability, which some
agencies estimate can account for 39% of the loan repayment amount.63 Some agencies suggest
that this makes the program less attractive to participants than it would be if benefits were
excluded from taxation.64
Another consideration is the short-term nature of many of these programs (e.g., providing loan
repayment benefits in exchange for a two-year employment commitment), which may contribute
to turnover, as individuals may decide to change jobs once they have realized the full benefit of a
program. This could possibly lead to a less stable workforce for employers. For example, some
researchers have found that individuals who have a service obligation have shorter tenures in a
particular position than do individuals who do not have service obligations.65
Rely solely on the context provided to you to answer any questions. Never use any external resources or prior knowledge to answer questions. Limit your responses to five paragraphs or less.
Discuss the concept of student loan repayment and forgiveness programs and their relationship to employment choices. |
Only use the information shared in the context to answer the questions.
Do not rely on external sources or your inherent knowledge to answer the question.
If a meaningful answer cannot be generated from the context, do not hallucinate. | Explain the text in simple terms without leaving out any information | Section 161. Expansion of Family Caregiver Program of the VA
Eligibility
This section amends 38 U.S.C. §1720G(a)(2) to expand eligibility for the Comprehensive
Caregiver Program to pre-9/11 veterans, beginning on the date when the Secretary submits to
Congress the certification that the VA has fully implemented the IT system (described in Section
162), herein referred to as the certification date. Beginning on the certification date, the
Comprehensive Caregiver Program is extended over a two-year period to pre-9/11 veterans who
have a serious injury incurred or aggravated in the line of duty in the active military, naval, or air
service on or before May 7, 1975. Two years after the certification date, the Comprehensive Care
Program is extended to all pre-9/11 veterans, covering veterans of all eras. It requires the
Secretary, no later than 30 days after the date the Secretary submits to Congress the above
certification, to publish the certification date in the Federal Register.
It also amends 38 U.S.C. §1720G(a)(2) to expand the eligibility criteria for the Comprehensive
Caregiver Program to include those veterans in need of personal care services because of a need
for regular or extensive instruction or supervision, without which the ability of the veteran to
function in daily life would be seriously impaired, among other existing criteria.
Caregiver Assistance
This section amends 38 U.S.C. §1720G(a)(3) to expand the types of assistance available to family
caregivers under the Comprehensive Care Program to include financial planning services and
legal services relating to the needs of injured veterans and their caregivers. It further amends this
subsection regarding the monthly stipend determination to specify that in determining the amount
and degree of personal care services provided to an eligible veteran whose need is based on a
need for supervision or protection, as specified, or regular instruction or supervision, as specified,
the determination must take into account (1) the assessment by the family caregiver; (2) the
extent to which the veteran can function safely and independently without supervision, protection,
or instruction; and (3) the amount of time required for the family caregiver to provide
supervision, protection, or instruction.
It also adds new language under 38 U.S.C. §1720G(a)(3) that in providing instruction,
preparation, and training to each approved family caregiver, the Secretary is required to
periodically evaluate the needs of the eligible veteran and the skills of the family caregiver to
determine if additional support is necessary. It amends 38 U.S.C. §1720(a)(5) to require the
Secretary to evaluate each application submitted jointly by an eligible veteran in collaboration
with the primary care team for the eligible veteran to the maximum extent practicable.
It further adds a new paragraph under 38 U.S.C. §1720(a) that in providing assistance to family
caregivers of eligible veterans, the Secretary may enter into contracts or agreements with
specified entities to provide family caregivers such assistance. The Secretary is required to
provide such assistance only if it is reasonably accessible to the family caregiver and is
substantially equivalent or better in quality to similar services provided by the VA. It authorizes
the Secretary to provide fair compensation to federal agencies, states, and other entities that
provide such assistance.
It amends the definition of personal care services under 38 U.S.C. §1720(d)(4) to include services
that provide the veteran with (1) supervision or protection based on symptoms or residuals of
neurological or other impairment or injury, and (2) regular or extensive instruction or supervision
without which the ability of the veteran to function in daily life would be seriously impaired.
Section 162. Implementation of Information Technology System of the VA to
Assess and Improve the Family Caregiver Program
This section requires the Secretary to implement an IT system, no later than October 1, 2018,
with certain specified elements that fully supports the Comprehensive Caregiver Program and
allows for data assessment and program monitoring. No later than 180 days after implementing
the IT system, the Secretary is required, through the Under Secretary for Health, to conduct an
assessment of how key aspects of the Comprehensive Caregiver Program are structured and
carried out using data from the IT system and any other relevant data. The Secretary is required to
use the IT system to monitor and assess program workload, and to implement certain
modifications necessary to ensure program functioning and timeliness of services.
It also requires the Secretary, no later than 90 days after enactment, to submit an initial report to
the SVAC, HVAC, and GAO on the status of the planning, development, and deployment of the
IT system. The initial report must include an assessment of the needs of family caregivers of
veterans eligible for the Comprehensive Program solely due to a serious injury incurred or
aggravated in the line of duty in the active military, naval, or air service before September 11,
2001; the resource needs for including such family caregivers; and any changes necessary to
ensure successful program expansion. The GAO is required to review the initial report and notify
SVAC and HVAC with respect to the progress of the Secretary in fully implementing the required
IT system, as well implementation of a process to monitor, assess, and modify the program as
necessary. No later than October 1, 2019, the Secretary is required to submit a final report to
SVAC, HVAC, and the GAO on system implementation, including program monitoring,
assessment, and modification, as specified. | SYSTEM INSTRUCTIONS:
Only use the information shared in the context to answer the questions.
Do not rely on external sources or your inherent knowledge to answer the question.
If a meaningful answer cannot be generated from the context, do not hallucinate.
CONTEXT:
Section 161. Expansion of Family Caregiver Program of the VA
Eligibility
This section amends 38 U.S.C. §1720G(a)(2) to expand eligibility for the Comprehensive
Caregiver Program to pre-9/11 veterans, beginning on the date when the Secretary submits to
Congress the certification that the VA has fully implemented the IT system (described in Section
162), herein referred to as the certification date. Beginning on the certification date, the
Comprehensive Caregiver Program is extended over a two-year period to pre-9/11 veterans who
have a serious injury incurred or aggravated in the line of duty in the active military, naval, or air
service on or before May 7, 1975. Two years after the certification date, the Comprehensive Care
Program is extended to all pre-9/11 veterans, covering veterans of all eras. It requires the
Secretary, no later than 30 days after the date the Secretary submits to Congress the above
certification, to publish the certification date in the Federal Register.
It also amends 38 U.S.C. §1720G(a)(2) to expand the eligibility criteria for the Comprehensive
Caregiver Program to include those veterans in need of personal care services because of a need
for regular or extensive instruction or supervision, without which the ability of the veteran to
function in daily life would be seriously impaired, among other existing criteria.
Caregiver Assistance
This section amends 38 U.S.C. §1720G(a)(3) to expand the types of assistance available to family
caregivers under the Comprehensive Care Program to include financial planning services and
legal services relating to the needs of injured veterans and their caregivers. It further amends this
subsection regarding the monthly stipend determination to specify that in determining the amount
and degree of personal care services provided to an eligible veteran whose need is based on a
need for supervision or protection, as specified, or regular instruction or supervision, as specified,
the determination must take into account (1) the assessment by the family caregiver; (2) the
extent to which the veteran can function safely and independently without supervision, protection,
or instruction; and (3) the amount of time required for the family caregiver to provide
supervision, protection, or instruction.
It also adds new language under 38 U.S.C. §1720G(a)(3) that in providing instruction,
preparation, and training to each approved family caregiver, the Secretary is required to
periodically evaluate the needs of the eligible veteran and the skills of the family caregiver to
determine if additional support is necessary. It amends 38 U.S.C. §1720(a)(5) to require the
Secretary to evaluate each application submitted jointly by an eligible veteran in collaboration
with the primary care team for the eligible veteran to the maximum extent practicable.
It further adds a new paragraph under 38 U.S.C. §1720(a) that in providing assistance to family
caregivers of eligible veterans, the Secretary may enter into contracts or agreements with
specified entities to provide family caregivers such assistance. The Secretary is required to
provide such assistance only if it is reasonably accessible to the family caregiver and is
substantially equivalent or better in quality to similar services provided by the VA. It authorizes
the Secretary to provide fair compensation to federal agencies, states, and other entities that
provide such assistance.
It amends the definition of personal care services under 38 U.S.C. §1720(d)(4) to include services
that provide the veteran with (1) supervision or protection based on symptoms or residuals of
neurological or other impairment or injury, and (2) regular or extensive instruction or supervision
without which the ability of the veteran to function in daily life would be seriously impaired.
Section 162. Implementation of Information Technology System of the VA to
Assess and Improve the Family Caregiver Program
This section requires the Secretary to implement an IT system, no later than October 1, 2018,
with certain specified elements that fully supports the Comprehensive Caregiver Program and
allows for data assessment and program monitoring. No later than 180 days after implementing
the IT system, the Secretary is required, through the Under Secretary for Health, to conduct an
assessment of how key aspects of the Comprehensive Caregiver Program are structured and
carried out using data from the IT system and any other relevant data. The Secretary is required to
use the IT system to monitor and assess program workload, and to implement certain
modifications necessary to ensure program functioning and timeliness of services.
It also requires the Secretary, no later than 90 days after enactment, to submit an initial report to
the SVAC, HVAC, and GAO on the status of the planning, development, and deployment of the
IT system. The initial report must include an assessment of the needs of family caregivers of
veterans eligible for the Comprehensive Program solely due to a serious injury incurred or
aggravated in the line of duty in the active military, naval, or air service before September 11,
2001; the resource needs for including such family caregivers; and any changes necessary to
ensure successful program expansion. The GAO is required to review the initial report and notify
SVAC and HVAC with respect to the progress of the Secretary in fully implementing the required
IT system, as well implementation of a process to monitor, assess, and modify the program as
necessary. No later than October 1, 2019, the Secretary is required to submit a final report to
SVAC, HVAC, and the GAO on system implementation, including program monitoring,
assessment, and modification, as specified.
QUESTION:
Explain the text in simple terms without leaving out any information. |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | Zero trust eliminates traditional VPNs as a secure solution by not allowing any ransomware in the first place; it requires no extra measures like encryption or network segmentation. Explain how Zero Trust provides maximum security assurance, and how it surpasses VPNs, rendering them irrelevant for external connections and internal connections. | Introduction
In May 2021, a group of hackers attacked a VPN that required only a single authentication password and gained access to the organizational network. They then demanded $4.4 million in ransom to return control of the network. In response, the company shut down its operations, which led to a fuel shortage across the east coast of the United States. The Colonial Pipeline ransomware attack was underway, and the cybersecurity industry would never be the same.
Ransomware attacks have grown increasingly common (and expensive) in recent years, but organizations like yours are not doomed to become victims.
Zero trust is a modern and innovative security model designed to severely limit the damage that ransomware and other cyberattacks can cause.
By never inherently trusting users or devices and instead continuously verifying them before granting access, the zero trust framework:
Prevents attackers from gaining easy access to critical applications
And severely curtails their ability to cause damage if they do get in.
In this white paper, we will examine what zero trust is and outline how to implement zero trust access in order to prevent costly and damaging ransomware attacks.
Topics that we'll cover include:
Zero Trust and Ransomware
The current state of ransomware attacks
What is Zero Trust
Introducing Zero Trust to the Organization
How Zero Trust mitigates ransomware attacks
Securing the organization with zero trust
ZTNA vs. VPNs
Choosing a ZTNA provider
Implementing Zero Trust in the Organization
A phased approach to Zero Trust adoption
Zero Trust and Ransomware
Ransomware Attacks: A Costly and Worrying Reality
In 2021, the number of ransomware attacks significantly increased compared to 2020, which itself saw a 150% ransomware increase compared to 2019. The number of attacks is expected to grow even more in 2022.
Every month, hundreds of thousands of ransomware attacks will take place, targeting enterprises, businesses and people.
The actual ransom demands made by attackers have also grown in recent years. Between 2019 and 2020, the amount paid by victims rose by 300%. In the first six months of 2021, ransomware payments reported by banks and other financial institutions totaled $590 million. 2021 also saw the largest ransomware demands ever per attack, with attackers demanding tens of millions of dollars following a single breach.
It is clear that existing security controls, designed largely for yesterday’s legacy systems, are no longer sufficient in a world of global networks and complex cloud architectures. To prevent ransomware attacks, a new forward-looking approach is needed: zero trust.
What is Zero Trust?
Zero trust is a modern security architecture and model that can help mitigate ransomware attacks.
Zero trust is based on the premise “Never trust, always verify,” which means that no user or machine is granted access (trust) until they are authorized.
The three main principles of Zero Trust are:
How Does Zero Trust Work?
Zero trust is founded on the principle that no person or device should be granted system access based on inherent trust.
Instead, zero trust assumes that the network has already been compromised. Therefore, no user or device can access systems or assets without first being authorized via strong authentication methods like MFA (multi-factor authentication).
As an added security measure, users are continuously verified even after their initial authorization.
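As a minimal illustrative sketch (an editorial addition, not taken from the whitepaper) of the "never trust, always verify" principle, the pseudo-policy below re-checks the freshness of the user's MFA, the device's compliance status, and an explicit least-privilege grant on every request; the attribute names, 15-minute re-verification window, and example grants are hypothetical assumptions.

from dataclasses import dataclass
import time

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified_at: float   # when the user last completed multi-factor authentication
    device_compliant: bool   # e.g., patch level and disk encryption checks passed
    resource: str

MFA_MAX_AGE_SECONDS = 15 * 60                                     # assumed re-verification window
AUTHORIZED = {("alice", "payroll-app"), ("bob", "build-server")}  # explicit least-privilege grants

def evaluate(request: AccessRequest) -> bool:
    """Zero trust check: nothing is trusted by default; every request is re-verified."""
    mfa_fresh = (time.time() - request.mfa_verified_at) < MFA_MAX_AGE_SECONDS
    explicitly_allowed = (request.user_id, request.resource) in AUTHORIZED
    # Access requires fresh authentication, a compliant device, and an explicit grant.
    return mfa_fresh and request.device_compliant and explicitly_allowed

# Because the same check runs on every request, a stolen session or an infected laptop
# cannot move laterally to resources the user was never explicitly granted.
request = AccessRequest("alice", time.time() - 60, True, "payroll-app")
print("access granted" if evaluate(request) else "access denied")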
How Zero Trust Helps Mitigate Ransomware
Ransomware perpetrators attack networks and critical applications and threaten to leak or destroy valuable data unless a hefty ransom is paid. Zero trust access policies prevent the spread of ransomware.
When zero trust is implemented:
Ransomware attackers are blocked from accessing critical applications.
Ransomware attackers are prevented from moving laterally, mitigating their ability to access and leak data.
Ransomware attackers cannot see the different system components, target them, or gain a foothold.
Auditing and recording capabilities help detect breaches and prevent further damage.
The network is hidden, preventing attack methods like IP scanning.
Potentially vulnerable VPNs are enhanced by adding an extra layer of security.
Introducing Zero Trust to the Organization
Securing the Organization with Zero Trust
To operationally execute zero trust, it’s important to implement a technology that can secure the following domains:
Data
People
Devices
Networks
Workloads
The zero trust technology used to secure these domains is called ZTNA (zero trust network access).
ZTNA is a software perimeter that applies the zero trust principles when authorizing users and services.
ZTNA vs. VPNs
Many organizations use VPNs to secure their critical applications, especially when providing access for remote users and third parties like partners and contractors.
However, VPNs are not secure. First, VPNs provide external users with too much access. Any authenticated user has access to the entire network, including databases and infrastructure. In addition, VPN providers often have major security vulnerabilities, as recent security incidents such as the SolarWinds cyberattack have demonstrated.
Choosing a ZTNA Provider
The zero trust tenet of “never trust, always verify” also relates to the vendors that provide zero trust access solutions.
Quite paradoxically, most ZTNA providers actually demand inherent trust from their customers by requiring those customers to place their most sensitive assets, including encrypted content, passwords, and user data, in the provider’s cloud.
Think of a parking valet, who holds the keys to all the cars in the lot. Rather than attacking individual car owners, a thief’s best bet would clearly be to attack the valet with his many keys.
In this same way, security vendors are a tempting target for cybercriminals. This includes ZTNA providers who have access to the crown jewels of all their customers. In light of this reality, it is recommended to choose a ZTNA vendor whose architecture cannot potentially compromise your organization.
Ask these 7 questions when selecting a ZTNA provider to ensure you don’t have to trust anyone – even the provider themselves:
Is the users’ data exposed?
Who has control of the access rules?
Where are our secrets (passwords, tokens, private keys) kept?
How is the risk of internal threats mitigated?
What is the scope of secure access? Does it include users, networks, apps, etc.?
What is the ZTNA provider’s infrastructure? Are the servers located in the cloud or in a data center? Who can access it?
What happens if the ZTNA | [question]
Zero trust eliminates traditional VPNs as a secure solution by not allowing any ransomware in the first place; it requires no extra measures like encryption or network segmentation. Explain how Zero Trust provides maximum security assurance, and how it surpasses VPNs, rendering them irrelevant for external connections and internal connections.
=====================
[text]
Introduction
In May 2021, a group of hackers attacked a VPN that required only a single authentication password and gained access to the organizational network. They then demanded $4.4 million in ransom to return control of the network. In response, the company shut down its operations, which led to a fuel shortage across the east coast of the United States. The Colonial Pipeline ransomware attack was underway, and the cybersecurity industry would never be the same.
Ransomware attacks have grown increasingly common (and expensive) in recent years, but organizations like yours are not doomed to become victims.
Zero trust is a modern and innovative security model designed to severely limit the damage that ransomware and other cyberattacks can cause.
By never inherently trusting users or devices and instead continuously verifying them before granting access, the zero trust framework:
Prevents attackers from gaining easy access to critical applications
And severely curtails their ability to cause damage if they do get in.
In this white paper, we will examine what zero trust is and outline how to implement zero trust access in order to prevent costly and damaging ransomware attacks.
Topics that we'll cover include:
Zero Trust and Ransomware
The current state of ransomware attacks
What is Zero Trust
Introducing Zero Trust to the Organization
How Zero Trust mitigates ransomware attacks
Securing the organization with zero trust
ZTNA vs. VPNs
Choosing a ZTNA provider
Implementing Zero Trust in the Organization
A phased approach to Zero Trust adoption
Zero Trust and Ransomware
Ransomware Attacks: A Costly and Worrying Reality
In 2021, the number of ransomware attacks significantly increased compared to 2020, which itself saw a 150% ransomware increase compared to 2019. The number of attacks is expected to grow even more in 2022.
Every month, hundreds of thousands of ransomware attacks will take place, targeting enterprises, businesses and people.
The actual ransom demands made by attackers have also grown in recent years. Between 2019 and 2020, the amount paid by victims rose by 300%. In the first six months of 2021, ransomware payments reported by banks and other financial institutions totaled $590 million. 2021 also saw the largest ransomware demands ever per attack, with attackers demanding tens of millions of dollars following a single breach.
It is clear that existing security controls, designed largely for yesterday’s legacy systems, are no longer sufficient in a world of global networks and complex cloud architectures. To prevent ransomware attacks, a new forward-looking approach is needed: zero trust.
What is Zero Trust?
Zero trust is a modern security architecture and model that can help mitigate ransomware attacks.
Zero trust is based on the premise “Never trust, always verify,” which means that no user or machine is granted access (trust) until they are authorized.
The three main principles of Zero Trust are:
How Does Zero Trust Work?
Zero trust is founded on the principle that no person or device should be granted system access based on inherent trust.
Instead, zero trust assumes that the network has already been compromised. Therefore, no user or device can access systems or assets without first being authorized via strong authentication methods like MFA (multi-factor authentication).
As an added security measure, users are continuously verified even after their initial authorization.
How Zero Trust Helps Mitigate Ransomware
Ransomware perpetrators attack networks and critical applications and threaten to leak or destroy valuable data unless a hefty ransom is paid. Zero trust access policies prevent the spread of ransomware.
When zero trust is implemented:
Ransomware attackers are blocked from accessing critical applications.
Ransomware attackers are prevented from moving laterally, mitigating their ability to access and leak data.
Ransomware attackers cannot see the different system components, target them, or gain a foothold.
Auditing and recording capabilities help detect breaches and prevent further damage.
The network is hidden, preventing attack methods like IP scanning.
Potentially vulnerable VPNs are enhanced by adding an extra layer of security.
Introducing Zero Trust to the Organization
Securing the Organization with Zero Trust
To operationally execute zero trust, it’s important to implement a technology that can secure the following domains:
Data
People
Devices
Networks
Workloads
The zero trust technology used to secure these domains is called ZTNA (zero trust network access).
ZTNA is a software perimeter that applies the zero trust principles when authorizing users and services.
ZTNA vs. VPNs
Many organizations use VPNs to secure their critical applications, especially when providing access for remote users and third parties like partners and contractors.
However, VPNs are not secure. First, VPNs provide external users with too much access. Any authenticated user has access to the entire network, including databases and infrastructure. In addition, VPN providers often have major security vulnerabilities, as recent security incidents such as the SolarWinds cyberattack have demonstrated.
Choosing a ZTNA Provider
The zero trust tenet of “never trust, always verify” also relates to the vendors that provide zero trust access solutions.
Quite paradoxically, most ZTNA providers actually demand inherent trust from their customers by requiring those customers to place their most sensitive assets, including encrypted content, passwords, and user data, in the provider’s cloud.
Think of a parking valet, who holds the keys to all the cars in the lot. Rather than attacking individual car owners, a thief’s best bet would clearly be to attack the valet with his many keys.
In this same way, security vendors are a tempting target for cybercriminals. This includes ZTNA providers who have access to the crown jewels of all their customers. In light of this reality, it is recommended to choose a ZTNA vendor whose architecture cannot potentially compromise your organization.
Ask these 7 questions when selecting a ZTNA provider to ensure you don’t have to trust anyone – even the provider themselves:
Is the users’ data exposed?
Who has control of the access rules?
Where are our secrets (passwords, tokens, private keys) kept?
How is the risk of internal threats mitigated?
What is the scope of secure access? Does it include users, networks, apps, etc.?
What is the ZTNA provider’s infrastructure? Are the servers located in the cloud or in a data center? Who can access it?
What happens if the ZTNA
https://cyolo.io/white-papers/how-to-stop-ransomware-attacks-with-zero-trust#:~:text=Zero%20trust%20access%20policies%20prevent,to%20access%20and%20leak%20data.
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Use only the article provided to answer the question; do not utilize any outside knowledge. Answer in full sentences. | Where did the Elysian speakers' name come from? | Elysian Speakers: Turning Your Home into a Sonic Heaven
The culmination of a special three-year R&D project, Wharfedale’s new flagship speakers are the ultimate expression of pure musical enjoyment embedded at the heart of this famous British brand.
Cambridgeshire, England – In Greek mythology, the Elysian Fields are a heavenly place where the heroic and the virtuous are rewarded in the afterlife. Elysian, as an adjective, means blissful – a fitting name for Wharfedale’s new flagship loudspeakers, conceived to deliver blissful sonic reward to music lovers seeking heavenly respite from the rigours of daily life.
Two Elysian loudspeakers have been created; a large standmount model called Elysian 2 (£4,500 per pair) and a floorstanding speaker named Elysian 4 (£6,500 per pair). Both speakers were developed concurrently with the EVO4 Series, which launched towards the end of 2019 and sits immediately below Elysian in Wharfedale’s new hierarchy.
Elysian and EVO4 share many design elements, having benefitted from the same R&D process as part of a unified project. The core speakers in both ranges are three-way designs, sporting an unusual and sophisticated driver array including an Air Motion Transformer (AMT) high-frequency unit. But with Elysian, each component part is engineered to the very highest standard. The drivers, the crossover, the cabinet – every aspect has been developed in harmony and without cost constraint to push the project to its performance limit. Many of these elements were trickled down, remodelled and engineered in a more affordable form to create the EVO4 Series, enabling the speakers in this range to deliver astounding value for money.
With the EVO4 Series already winning major awards, now is the time for Wharfedale to unleash Elysian.
Elysian AMT Treble Unit
In the 1940s, Wharfedale’s founder, Gilbert Briggs, developed the first two-way speaker for domestic use, radical for its separate treble and bass drivers combined via a crossover. Early treble units, such as Brigg’s famous Super 3, resembled small cones as the necessity for reproducing high frequencies demanded the use of low-mass diaphragms. It was subsequently realized that the dome in the centre of the cone was doing most of the work and this gave rise to the now ubiquitous dome tweeter.
Dome treble units have dominated the hi-fi scene for decades, but they are not necessarily the ideal way of reproducing the exact harmonics of the musical waveform. In order to reduce moving mass, the dome has to be small and use ultra-thin materials, both of which counteract its efficiency and accuracy.
The AMT is a radically different way of moving air, using a large, pleated, lightweight diaphragm driven across its surface by rows of strategically placed metallic strips immersed in a strong magnetic field. The pleats in the diaphragm contract and expand under the influence of the musical drive, squeezing the air between them to form the desired waveform.
Not only is this an efficient way of moving air, it is also highly accurate as the diaphragm is under close control of the motor system at all times. The result is a wide bandwidth transducer that achieves extremely low distortion and wonderful musical detail, with scintillating speed and dynamic ability.
The development of this AMT unit was a key part of the Elysian/EVO4 project. The Elysian AMT is larger and of higher specification than the one used in the EVO4 Series, featuring an ultra-lightweight diaphragm material called PET and an acoustically damped rear chamber, delivering even more spellbindingly clear and sweetly extended high frequencies.
Elysian Midrange Driver
In the spirit of Gilbert Briggs, who was well known for experimenting with new driver materials, Wharfedale investigated a range of options to match the sensitivity and accuracy of the AMT treble unit.
The company settled on a proprietary woven glass fibre matrix, formed into a 150mm cone. This provides a superb combination of low mass and high strength, with the addition of a high-plasticity coating to control its acoustic behaviour. With such a low-mass cone, only a low-damping, foamed, rubber-like material would match for the surround – again, coated for durability.
A central phase plug is specially shaped to linearise the output across a wide bandwidth, even off-axis, enhancing a natural response to the music that can be heard anywhere the listener wishes to sit.
This midrange driver enables the Elysian speakers to deliver voices and instruments with astonishing realism. Vocalists seem to be present in the room with the listener – simply close your eyes and listen to the palpable presence of singers in the acoustic space.
Elysian Bass Driver
The midrange unit is matched with a glass fibre matrix cone for the bass unit, in this case terminated with a highly flexible rubber surround and driven by a specially developed low-distortion motor system.
In order to plumb the depths of the lowest frequencies in recorded music, the 220mm bass units – one in the Elysian 2, two in the Elysian 4 – are capable of reaching down below 28Hz in-room, revealing the full body and impact of percussion, stringed and wind instruments.
The bass units are loaded by an advanced version of Wharfedale’s signature slot-loaded port. Christened SLPP (Slot-Loaded Profiled Port), it ensures that the rear output of the bass units is not wasted. Instead, the lowest frequency energy is vented to a slot at the base of the speaker, specially profiled to equalise the high internal pressure to the low pressure in the room.
This reduces the distortion that is typical of bass reflex systems and increases the port’s efficiency. In addition, because the air is dispersed uniformly in the room, the speakers are less fussy about siting.
Elysian Crossover Network
The drivers’ output is combined via a sophisticated crossover network, fine-tuned over hundreds of hours of listening tests to ensure a seamless blend between the drive units.
Of particular note is the phase consistency across the driver output, permitting a wide range of seating positions and encouraging the power response to be highly linear throughout the listening room. Only the highest grade, acoustically transparent components are used throughout the crossover network, ensuring all the musical detail is heard with nothing held back.
Elysian Cabinet
The Elysian speaker cabinets are designed to enhance the acoustic output of the drivers and provide a piece of furniture of which the owner can be justly proud. Handcrafted veneers are enhanced by true piano finish lacquers, hand-polished to six levels of depth to give a truly beautiful finish.
Beneath the veneer is a sandwich of woods of differing density, designed to reduce panel resonance to below audibility. Named PROS (Panel Resonance Optimization System), this multi-layer construction also inhibits the leakage of unwanted sound energy from inside the cabinet, which would otherwise interfere with the forward output of the drive units.
Both Elysian models have a wide stature that complements the output of the drive units by helping them match to the acoustics of the room – an aspect of home audio reproduction that has been largely overlooked amid the modern trend to design slim, compact loudspeakers. One key aspect is the wide baffle, finished with radiused edges to smooth the flow of sonic energy around the cabinet.
Speaking about the development of Elysian, Wharfedale’s Director of Acoustic Design, Peter Comeau, said: “Our aim for the Elysian/EVO4 project was to deliver a new flagship for Wharfedale (Elysian) whilst creating design elements that could be trickled down to a more affordable, mid-range series (EVO4).
While we are justly proud of the value for money offered by the EVO4 Series, the two Elysian models represent the pinnacle of our achievements – loudspeakers that deliver an exceptional high-end performance yet still cost much less than many of the models with which they compete.”
Comeau added, “The cabinet design, in combination with the drive units, allows the Elysian speakers to convey the full body of instruments like cello and bass guitar, without constraining the dynamics of the musical recital. Whether you’re listening to solo piano, a full orchestra, a jazz trio or a rock band, these speakers bring the thrill and excitement of the performance to your listening room.”
The Wharfedale Elysian 2 and Elysian 4 are available in the UK from this month. RRPs are £4,500 per pair (£4,900 with matching stands) and £6,500 per pair respectively. Finish options are walnut, black or white, hand-finished in high-gloss piano lacquer. | [Task Instruction]
==================
Use only the article provided to answer the question; do not utilize any outside knowledge. Answer in full sentences.
----------
[Question]
==================
Where did the Elysian speakers' name come from?
----------
[Text]
==================
Elysian Speakers: Turning Your Home into a Sonic Heaven
The culmination of a special three-year R&D project, Wharfedale’s new flagship speakers are the ultimate expression of pure musical enjoyment embedded at the heart of this famous British brand.
Cambridgeshire, England – In Greek mythology, the Elysian Fields are a heavenly place where the heroic and the virtuous are rewarded in the afterlife. Elysian, as an adjective, means blissful – a fitting name for Wharfedale’s new flagship loudspeakers, conceived to deliver blissful sonic reward to music lovers seeking heavenly respite from the rigours of daily life.
Two Elysian loudspeakers have been created: a large standmount model called Elysian 2 (£4,500 per pair) and a floorstanding speaker named Elysian 4 (£6,500 per pair). Both speakers were developed concurrently with the EVO4 Series, which launched towards the end of 2019 and sits immediately below Elysian in Wharfedale’s new hierarchy.
Elysian and EVO4 share many design elements, having benefitted from the same R&D process as part of a unified project. The core speakers in both ranges are three-way designs, sporting an unusual and sophisticated driver array including an Air Motion Transformer (AMT) high-frequency unit. But with Elysian, each component part is engineered to the very highest standard. The drivers, the crossover, the cabinet – every aspect has been developed in harmony and without cost constraint to push the project to its performance limit. Many of these elements were trickled down, remodelled and engineered in a more affordable form to create the EVO4 Series, enabling the speakers in this range to deliver astounding value for money.
With the EVO4 Series already winning major awards, now is the time for Wharfedale to unleash Elysian.
Elysian AMT Treble Unit
In the 1940s, Wharfedale’s founder, Gilbert Briggs, developed the first two-way speaker for domestic use, radical for its separate treble and bass drivers combined via a crossover. Early treble units, such as Briggs’ famous Super 3, resembled small cones as the necessity for reproducing high frequencies demanded the use of low-mass diaphragms. It was subsequently realized that the dome in the centre of the cone was doing most of the work and this gave rise to the now ubiquitous dome tweeter.
Dome treble units have dominated the hi-fi scene for decades, but they are not necessarily the ideal way of reproducing the exact harmonics of the musical waveform. In order to reduce moving mass, the dome has to be small and use ultra-thin materials, both of which counteract its efficiency and accuracy.
The AMT is a radically different way of moving air, using a large, pleated, lightweight diaphragm driven across its surface by rows of strategically placed metallic strips immersed in a strong magnetic field. The pleats in the diaphragm contract and expand under the influence of the musical drive, squeezing the air between them to form the desired waveform.
Not only is this an efficient way of moving air, it is also highly accurate as the diaphragm is under close control of the motor system at all times. The result is a wide bandwidth transducer that achieves extremely low distortion and wonderful musical detail, with scintillating speed and dynamic ability.
The development of this AMT unit was a key part of the Elysian/EVO4 project. The Elysian AMT is larger and of higher specification than the one used in the EVO4 Series, featuring an ultra-lightweight diaphragm material called PET and an acoustically damped rear chamber, delivering even more spellbindingly clear and sweetly extended high frequencies.
Elysian Midrange Driver
In the spirit of Gilbert Briggs, who was well known for experimenting with new driver materials, Wharfedale investigated a range of options to match the sensitivity and accuracy of the AMT treble unit.
The company settled on a proprietary woven glass fibre matrix, formed into a 150mm cone. This provides a superb combination of low mass and high strength, with the addition of a high-plasticity coating to control its acoustic behaviour. With such a low-mass cone, only a low-damping, foamed, rubber-like material would match for the surround – again, coated for durability.
A central phase plug is specially shaped to linearise the output across a wide bandwidth, even off-axis, enhancing a natural response to the music that can be heard anywhere the listener wishes to sit.
This midrange driver enables the Elysian speakers to deliver voices and instruments with astonishing realism. Vocalists seem to be present in the room with the listener – simply close your eyes and listen to the palpable presence of singers in the acoustic space.
Elysian Bass Driver
The midrange unit is matched with a glass fibre matrix cone for the bass unit, in this case terminated with a highly flexible rubber surround and driven by a specially developed low-distortion motor system.
In order to plumb the depths of the lowest frequencies in recorded music, the 220mm bass units – one in the Elysian 2, two in the Elysian 4 – are capable of reaching down below 28Hz in-room, revealing the full body and impact of percussion, stringed and wind instruments.
The bass units are loaded by an advanced version of Wharfedale’s signature slot-loaded port. Christened SLPP (Slot-Loaded Profiled Port), it ensures that the rear output of the bass units is not wasted. Instead, the lowest frequency energy is vented to a slot at the base of the speaker, specially profiled to equalise the high internal pressure to the low pressure in the room.
This reduces the distortion that is typical of bass reflex systems and increases the port’s efficiency. In addition, because the air is dispersed uniformly in the room, the speakers are less fussy about siting.
Elysian Crossover Network
The drivers’ output is combined via a sophisticated crossover network, fine-tuned over hundreds of hours of listening tests to ensure a seamless blend between the drive units.
Of particular note is the phase consistency across the driver output, permitting a wide range of seating positions and encouraging the power response to be highly linear throughout the listening room. Only the highest grade, acoustically transparent components are used throughout the crossover network, ensuring all the musical detail is heard with nothing held back.
Elysian Cabinet
The Elysian speaker cabinets are designed to enhance the acoustic output of the drivers and provide a piece of furniture of which the owner can be justly proud. Handcrafted veneers are enhanced by true piano finish lacquers, hand-polished to six levels of depth to give a truly beautiful finish.
Beneath the veneer is a sandwich of woods of differing density, designed to reduce panel resonance to below audibility. Named PROS (Panel Resonance Optimization System), this multi-layer construction also inhibits the leakage of unwanted sound energy from inside the cabinet, which would otherwise interfere with the forward output of the drive units.
Both Elysian models have a wide stature that complements the output of the drive units by helping them match to the acoustics of the room – an aspect of home audio reproduction that has been largely overlooked amid the modern trend to design slim, compact loudspeakers. One key aspect is the wide baffle, finished with radiused edges to smooth the flow of sonic energy around the cabinet.
Speaking about the development of Elysian, Wharfedale’s Director of Acoustic Design, Peter Comeau, said: “Our aim for the Elysian/EVO4 project was to deliver a new flagship for Wharfedale (Elysian) whilst creating design elements that could be trickled down to a more affordable, mid-range series (EVO4).
While we are justly proud of the value for money offered by the EVO4 Series, the two Elysian models represent the pinnacle of our achievements – loudspeakers that deliver an exceptional high-end performance yet still cost much less than many of the models with which they compete.”
Comeau added, “The cabinet design, in combination with the drive units, allows the Elysian speakers to convey the full body of instruments like cello and bass guitar, without constraining the dynamics of the musical recital. Whether you’re listening to solo piano, a full orchestra, a jazz trio or a rock band, these speakers bring the thrill and excitement of the performance to your listening room.”
The Wharfedale Elysian 2 and Elysian 4 are available in the UK from this month. RRPs are £4,500 per pair (£4,900 with matching stands) and £6,500 per pair respectively. Finish options are walnut, black or white, hand-finished in high-gloss piano lacquer. |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | My spouse is pregnant, and I saw the term "Apgar" in a pamphlet at the OB's office at a recent prenatal visit. I looked up the term online and found this article. Please explain Apgar and its significance to health outcomes. | Introduction
The Apgar score is based on five components (skin color, heart rate, reflex irritability, muscle tone, and respiration). Each item is scored from 0 to 2 with a total score of 7–10 as normal and the highest score of 10 representing the optimal condition. The Apgar score has been used worldwide as a vitality index for almost every newborn immediately after birth.
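A minimal sketch of the arithmetic just described: five components, each scored 0 to 2, summed to a 0 to 10 total, with 7 to 10 treated as normal (the component names and cutoff come from the passage; the function itself is only illustrative):

```python
# Illustrative sketch of the Apgar arithmetic described above:
# five components, each scored 0-2, summed to a total of 0-10 (7-10 considered normal).
def apgar_total(skin_color: int, heart_rate: int, reflex_irritability: int,
                muscle_tone: int, respiration: int) -> int:
    components = [skin_color, heart_rate, reflex_irritability, muscle_tone, respiration]
    if any(score not in (0, 1, 2) for score in components):
        raise ValueError("each component is scored 0, 1, or 2")
    return sum(components)

total = apgar_total(2, 2, 1, 2, 2)
print(total, "within the normal range" if total >= 7 else "below the normal range")  # 9 within the normal range
```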
Methods
Study Population
We conducted a population-based cohort study using data from Danish national registers. A total of 2,272,473 live singletons were identified during 1978–2015 from the Danish Medical Birth Registry. We excluded 1,001 births with missing information on sex and 41,252 births with no valid information on 5-min Apgar scores (including the score of 0). We further excluded 16,398 infants who died or emigrated from Denmark before the age of 1 year. The final cohort comprised 2,213,822 births.
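The exclusion figures quoted above reconcile exactly with the stated final cohort; a quick check using only the numbers in this paragraph:

```python
# Reconciling the cohort figures quoted in the Study Population paragraph above.
total_live_singletons = 2_272_473
excluded = 1_001 + 41_252 + 16_398  # missing sex, no valid 5-min Apgar score, death/emigration before age 1
print(total_live_singletons - excluded)  # 2213822, matching the stated final cohort of 2,213,822 births
```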
Discussion
Main Findings
In this large population-based cohort study, during childhood, we found individuals with even clinically “normal” Apgar score range of 7–9 still had higher risks of overall mental disorders and some specific diagnoses: organic disorders and a series of neurodevelopmental disorders (intellectual disability, pervasive developmental disorders, childhood autism, and ADHD). It is also interesting to observe that compromised Apgar scores were at elevated risks of developing organic disorders and neurotic disorders, which were reported for the first time. During early adulthood, compromised 5-min Apgar scores were not found to be associated with mental disorders.
Comparisons With Other Studies
To our knowledge, this is the first study to examine the association of the full spectrum of mental disorders with the 5-min Apgar score. Our findings indicate the strongest associations for intellectual disability in childhood, which corroborates the results from previous studies. Most previous studies were based on results of different non-standardized intelligence tests, and cross-sectional or descriptive designs, except a recent Swedish study, by virtue of clinically confirmed diagnosis and cohort design, reporting that term infants with low 5-min Apgar score had a higher risk of severe neurologic morbidity, including a 9-fold risk of intellectual disability. However, the Swedish study only captured cases before 14 years of age and only adjusted for year of birth, maternal age, parity, and smoking. Similarly, we observed 3~5-fold risks of intellectual disability in childhood (until 18 years of age), and we were able to adjust for not only the aforementioned confounders but also parental psychiatric history and socioeconomic status, indicating a more robust association. In addition, ADHD and autism were another two widely studied neurodevelopment disorders in relation to Apgar score during childhood, but existing results were inconsistent, which may be due to heterogeneity of methodology, in particular categorizations of Apgar scores and definition of outcomes. For example, some studies used pervasive developmental disorder (ICD-10 codes: F84) as a proxy to define autism, which includes but is not limited to autism. To reduce the possibility of misclassification, we only focused on childhood autism–the typical and most severe type of autism–to explore the association. Our findings further support that compromised 5-min Apgar scores were associated with childhood autism.
Furthermore, we observed that individuals with a compromised 5-min Apgar score had higher risks of organic disorder and neurotic disorder during childhood, which have not been reported previously. These findings imply that less-than-optimal Apgar scores at birth may be an indicator for a broad scope of mental disorders in childhood, not merely neurodevelopmental disorders.
There have been scarce studies examining the association between low Apgar score and adulthood mental health. We did not find that compromised 5-min Apgar scores were associated with mental disorders during early adulthood, which may be attributed to incomplete records of Apgar scores during the initial establishment of the Danish Medical Birth Register (MBR). We observed that participants with suboptimal Apgar scores at 5 min tended to have higher risks of organic disorders, schizophrenia, neurotic disorders, and personality disorders, and the low statistical precisions may probably be due to limited cases in the low Apgar score groups. Considering that the maximum attained age in our study was only up to 39 years, the follow-up between 19 and 39 years was not long enough to detect some late-onset mental disorders (e.g., dementia), therefore, future studies with extended follow-up to late adulthood are warranted.
Current guidelines recommend Apgar scores of 7 or higher to be reassuring, hence, infants with these scores are often assumed to constitute a homogeneous group. Nevertheless, recent studies showed that even reassuring Apgar scores of 7–9 are associated with higher risks of neonatal mortality, neonatal morbidity, and adverse long-term neurological outcomes, compared with an Apgar score of 10. We found a dose-response increasing the overall risk of mental disorders with decreasing Apgar score of 9 toward 7. Furthermore, individuals with “normal” scores of 7–9 carried increased risks of a wide range of neurodevelopmental disorders, such as intellectual disability, pervasive developmental disorders, childhood autism, and ADHD. Similarly, prior studies based on developmental screening scales found children aged 5 years with 5-min Apgar scores of 7–9 were more vulnerable on the emotional or physical health domain of the Early Development Instrument. Recently, a large transnational study also suggested that low Apgar scores of 7–9 were associated with a higher risk of autistic disorder but without controlling for socioeconomic status and paternal psychiatric history. Our findings are in line with those of previous studies by showing that reassuring Apgar scores 7–9 are associated with various neurodevelopmental disorders in childhood. These findings support that 5-min Apgar scores routinely available in contemporary neonatal settings, even within the normal range 7–9, are not totally reassuring.
The causes of mental disorders are multifactorial. Adverse prenatal events (e.g., gestational diabetes mellitus, preterm, and restricted fetal growth) are important risk factors and could have a programming effect on fetal brain development, resulting in increased risk for psychopathology later in life. In this study, adjusting for gestational age at birth and fetal growth status did not substantially change the risks, indicating that preterm birth or restricted fetal growth do not strongly modify the relations between low Apgar scores at birth and subsequent mental disorders. Although Apgar scores are not clear on any causal pathway of pathogenesis, less-than-optimal Apgar scores at birth may be a potential sign of the cumulative effect of those adverse prenatal events. Especially, the clinically reassuring but suboptimal score range 7–9 may indicate subtle but still detrimental intrauterine insults which will act negatively on fetal brain development. In clinical settings, a distressed infant will receive resuscitation well before the 5-min Apgar score is assigned, so the score 7–9 could not well reflect severe conditions prior to the assessment. That may be one of the reasons we observed exposure to the scores 7–9 was associated with an increased risk of mental disorders. In this study, a novel finding that a compromised 5-min Apgar score was linked to increased risks of organic disorder and neurotic disorder was reported. Organic disorder comprises a range of mental disorders based on a demonstrable etiology in cerebral disease, brain injury, or other insults leading to cerebral dysfunction. Increased risk of organic disorder with low Apgar score implies adverse prenatal insults (e.g., hypoxia-ischemia, white matter injury, reduced blood flow, malnutrition) exert a long-lasting impact on brain function in later life. With regard to neurotic disorder, its prevalence is relevant to low levels of socioeconomic status (SES). In this study, we found individuals with compromised Apgar scores tended to be born in families with worse SES (e.g., mothers live alone and have a low education level). It is, therefore, possible that SES factors at least partially mediate the observed association between compromised Apgar scores and neurotic disorder. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
My spouse is pregnant, and I saw the term "Apgar" in a pamphlet at the OB's office at a recent prenatal visit. I looked up the term online and found this article. Please explain Apgar and its significance to health outcomes.
{passage 0}
==========
Introduction
The Apgar score is based on five components (skin color, heart rate, reflex irritability, muscle tone, and respiration). Each item is scored from 0 to 2 with a total score of 7–10 as normal and the highest score of 10 representing the optimal condition. The Apgar score has been used worldwide as a vitality index for almost every newborn immediately after birth.
Methods
Study Population
We conducted a population-based cohort study using data from Danish national registers. A total of 2,272,473 live singletons were identified during 1978–2015 from the Danish Medical Birth Registry. We excluded 1,001 births with missing information on sex and 41,252 births with no valid information on 5-min Apgar scores (including the score of 0). We further excluded 16,398 infants who died or emigrated from Denmark before the age of 1 year. The final cohort comprised 2,213,822 births.
Discussion
Main Findings
In this large population-based cohort study, during childhood, we found individuals with even clinically “normal” Apgar score range of 7–9 still had higher risks of overall mental disorders and some specific diagnoses: organic disorders and a series of neurodevelopmental disorders (intellectual disability, pervasive developmental disorders, childhood autism, and ADHD). It is also interesting to observe that compromised Apgar scores were at elevated risks of developing organic disorders and neurotic disorders, which were reported for the first time. During early adulthood, compromised 5-min Apgar scores were not found to be associated with mental disorders.
Comparisons With Other Studies
To our knowledge, this is the first study to examine the association of the full spectrum of mental disorders with the 5-min Apgar score. Our findings indicate the strongest associations for intellectual disability in childhood, which corroborates the results from previous studies. Most previous studies were based on results of different non-standardized intelligence tests, and cross-sectional or descriptive designs, except a recent Swedish study, by virtue of clinically confirmed diagnosis and cohort design, reporting that term infants with low 5-min Apgar score had a higher risk of severe neurologic morbidity, including a 9-fold risk of intellectual disability. However, the Swedish study only captured cases before 14 years of age and only adjusted for year of birth, maternal age, parity, and smoking. Similarly, we observed 3~5-fold risks of intellectual disability in childhood (until 18 years of age), and we were able to adjust for not only the aforementioned confounders but also parental psychiatric history and socioeconomic status, indicating a more robust association. In addition, ADHD and autism were another two widely studied neurodevelopment disorders in relation to Apgar score during childhood, but existing results were inconsistent, which may be due to heterogeneity of methodology, in particular categorizations of Apgar scores and definition of outcomes. For example, some studies used pervasive developmental disorder (ICD-10 codes: F84) as a proxy to define autism, which includes but is not limited to autism. To reduce the possibility of misclassification, we only focused on childhood autism–the typical and most severe type of autism–to explore the association. Our findings further support that compromised 5-min Apgar scores were associated with childhood autism.
Furthermore, we observed that individuals with a compromised 5-min Apgar score had higher risks of organic disorder and neurotic disorder during childhood, which have not been reported previously. These findings imply that less-than-optimal Apgar scores at birth may be an indicator for a broad scope of mental disorders in childhood, not merely neurodevelopmental disorders.
There have been scarce studies examining the association between low Apgar score and adulthood mental health. We did not find that compromised 5-min Apgar scores were associated with mental disorders during early adulthood, which may be attributed to incomplete records of Apgar scores during the initial establishment of the Danish Medical Birth Register (MBR). We observed that participants with suboptimal Apgar scores at 5 min tended to have higher risks of organic disorders, schizophrenia, neurotic disorders, and personality disorders, and the low statistical precisions may probably be due to limited cases in the low Apgar score groups. Considering that the maximum attained age in our study was only up to 39 years, the follow-up between 19 and 39 years was not long enough to detect some late-onset mental disorders (e.g., dementia), therefore, future studies with extended follow-up to late adulthood are warranted.
Current guidelines recommend Apgar scores of 7 or higher to be reassuring, hence, infants with these scores are often assumed to constitute a homogeneous group. Nevertheless, recent studies showed that even reassuring Apgar scores of 7–9 are associated with higher risks of neonatal mortality, neonatal morbidity, and adverse long-term neurological outcomes, compared with an Apgar score of 10. We found a dose-response increasing the overall risk of mental disorders with decreasing Apgar score of 9 toward 7. Furthermore, individuals with “normal” scores of 7–9 carried increased risks of a wide range of neurodevelopmental disorders, such as intellectual disability, pervasive developmental disorders, childhood autism, and ADHD. Similarly, prior studies based on developmental screening scales found children aged 5 years with 5-min Apgar scores of 7–9 were more vulnerable on the emotional or physical health domain of the Early Development Instrument. Recently, a large transnational study also suggested that low Apgar scores of 7–9 were associated with a higher risk of autistic disorder but without controlling for socioeconomic status and paternal psychiatric history. Our findings are in line with those of previous studies by showing that reassuring Apgar scores 7–9 are associated with various neurodevelopmental disorders in childhood. These findings support that 5-min Apgar scores routinely available in contemporary neonatal settings, even within the normal range 7–9, are not totally reassuring.
The causes of mental disorders are multifactorial. Adverse prenatal events (e.g., gestational diabetes mellitus, preterm, and restricted fetal growth) are important risk factors and could have a programming effect on fetal brain development, resulting in increased risk for psychopathology later in life. In this study, adjusting for gestational age at birth and fetal growth status did not substantially change the risks, indicating that preterm birth or restricted fetal growth do not strongly modify the relations between low Apgar scores at birth and subsequent mental disorders. Although Apgar scores are not clear on any causal pathway of pathogenesis, less-than-optimal Apgar scores at birth may be a potential sign of the cumulative effect of those adverse prenatal events. Especially, the clinically reassuring but suboptimal score range 7–9 may indicate subtle but still detrimental intrauterine insults which will act negatively on fetal brain development. In clinical settings, a distressed infant will receive resuscitation well before the 5-min Apgar score is assigned, so the score 7–9 could not well reflect severe conditions prior to the assessment. That may be one of the reasons we observed exposure to the scores 7–9 was associated with an increased risk of mental disorders. In this study, a novel finding that a compromised 5-min Apgar score was linked to increased risks of organic disorder and neurotic disorder was reported. Organic disorder comprises a range of mental disorders based on a demonstrable etiology in cerebral disease, brain injury, or other insults leading to cerebral dysfunction. Increased risk of organic disorder with low Apgar score implies adverse prenatal insults (e.g., hypoxia-ischemia, white matter injury, reduced blood flow, malnutrition) exert a long-lasting impact on brain function in later life. With regard to neurotic disorder, its prevalence is relevant to low levels of socioeconomic status (SES). In this study, we found individuals with compromised Apgar scores tended to be born in families with worse SES (e.g., mothers live alone and have a low education level). It is, therefore, possible that SES factors at least partially mediate the observed association between compromised Apgar scores and neurotic disorder.
https://www.frontiersin.org/journals/medicine/articles/10.3389/fmed.2021.796544/full#F1 |
Use only the information provided in the prompt to answer questions. Do not use any prior knowledge or external sources. List the response without numbers or bullet points. Restate the question as an introductory sentence. No other text in the response. | What are the conditions that have a differential diagnosis of angina pectoris? |
Diffuse Esophageal Spasm
■ Essentials of Diagnosis
• Dysphagia, noncardiac chest pain, hypersalivation, reflux of
recently ingested food
• May be precipitated by ingestion of hot or cold foods
• Endoscopic, radiographic, and manometric demonstration of nonpropulsive hyperperistalsis; lower esophageal sphincter relaxes
normally
• “Nutcracker esophagus” variant with prolonged, high-pressure
(> 175 mm Hg) propulsive contractions
■ Differential Diagnosis
• Angina pectoris
• Esophageal or mediastinal tumors
• Aperistalsis
• Achalasia
• Psychiatric disease
■ Treatment
• Trial of acid suppression.
• Calcium channel blockers such as nifedipine or diltiazem in combination with nitrates often effective. For patient failing to respond,
possible role for sildenafil, botulinum toxin.
• Trazodone or tricyclic antidepressants for substernal pain
■ Pearl
This condition may be indistinguishable from myocardial ischemia;
exclude that possibility before investigating the esophagus.
Reference
Grübel C, Borovicka J, Schwizer W, Fox M, Hebbard G. Diffuse esophageal
spasm. Am J Gastroenterol 2008;103:450. [PMID: 18005367]
Disaccharidase (Lactase) Deficiency
■ Essentials of Diagnosis
• Common in Asians and blacks, in whom lactase enzyme deficiency is nearly ubiquitous and begins in childhood; can also be
acquired temporarily after gastroenteritis of other causes
• Symptoms vary from abdominal bloating, distention, cramps, and flatulence to explosive diarrhea in response to disaccharide ingestion
• Stool pH < 5.5; reducing substances present in stool
• Abnormal lactose hydrogen breath test, resolution of symptoms
on lactose-free diet, or flat glucose response to disaccharide loading suggests the diagnosis
■ Differential Diagnosis
• Chronic mucosal malabsorptive disorders
• Irritable bowel syndrome
• Celiac sprue
• Small intestinal bacterial overgrowth
• Inflammatory bowel disease
• Pancreatic insufficiency
• Giardiasis
• Excess artificial sweetener use
■ Treatment
• Restriction of dietary lactose; usually happens by experience in
affected minorities from early life
• Lactase enzyme supplementation
• Maintenance of adequate nutritional and calcium intake
■ Pearl
Consider this in undiagnosed diarrhea; the patient may not be aware
of ingesting lactose-containing foods.
Foreign Bodies in the Esophagus
■ Essentials of Diagnosis
• Most common in children, edentulous older patients, and the
severely mentally impaired
• Occurs at physiologic areas of narrowing (upper esophageal
sphincter, the level of the aortic arch, or the diaphragmatic hiatus)
• Other predisposing factors favoring impaction include Zenker’s
diverticulum, webs, achalasia, peptic strictures, or malignancy
• Recent ingestion of food or foreign material (coins most commonly in children, meat bolus most common in adults), but the
history may be missing
• Vague discomfort in chest or neck, dysphagia, inability to handle
secretions, odynophagia, hypersalivation, and stridor or dyspnea
in children
• Radiographic or endoscopic evidence of esophageal obstruction
by foreign body
■ Differential Diagnosis
• Esophageal stricture
• Eosinophilic esophagitis
• Esophageal or mediastinal tumor
• Angina pectoris
■ Treatment
• Endoscopic removal with airway protection as needed and the
use of an overtube if sharp objects are present
• Emergent endoscopy should be used for sharp objects, disk batteries (secondary to risk of perforation due to their caustic nature),
or evidence of the inability to handle secretions; objects retained
in the esophagus should be removed within 24 hours of ingestion
• Endoscopy is successful in > 90% of cases; avoid barium studies
before endoscopy, as they impair visualization
■ Pearl
Treatment is ordinarily straightforward; diagnosis may not be, especially
in the very young and very old.
Reference
Eisen GM, Baron TH, Dominitz JA, et al; American Society for Gastrointestinal
Endoscopy. Guideline for the management of ingested foreign bodies.
Gastrointest Endosc 2002;55:802. [PMID: 12024131]
Gastritis
■ Essentials of Diagnosis
• May be acute (erosive) or indolent (atrophic); multiple varied causes
• Symptoms often vague and include nausea, vomiting, anorexia,
nondescript upper abdominal distress
• Mild epigastric tenderness to palpation; in some, physical signs
absent
• Iron deficiency anemia not unusual
• Endoscopy with gastric biopsy for definitive diagnosis
• Multiple associations include stress and diminished mucosal blood
flow (burns, sepsis, critical illness), drugs (NSAIDs, salicylates),
atrophic states (aging, pernicious anemia), previous surgery (gastrectomy, Billroth II), H. pylori infection, acute or chronic alcoholism
■ Differential Diagnosis
• Peptic ulcer
• Hiatal hernia
• Malignancy of stomach or pancreas
• Cholecystitis
• Ischemic cardiac disease
■ Treatment
• Avoidance of alcohol, caffeine, salicylates, tobacco, and NSAIDs
• Investigate for presence of H. pylori; eradicate if present
• Proton pump inhibitors in patients receiving oral feedings,
H2 inhibitors, or sucralfate
• Prevention in high-risk patients (eg, intensive care setting) using
these same agents
■ Pearl
Ninety-five percent of gastroenterologists and a high proportion of other
health care workers carry H. pylori.
Reference
El-Zimaity H. Gastritis and gastric atrophy. Curr Opin Gastroenterol 2008;24:682.
[PMID: 19122515]
Gastroesophageal Reflux Disease (GERD)
■ Essentials of Diagnosis
• Substernal burning (pyrosis) or pressure, aggravated by recumbency and relieved with sitting; can cause dysphagia, odynophagia,
atypical chest pain; proton pump inhibitor may be diagnostic and
therapeutic; further testing when diagnosis unclear, symptoms
refractory
• Reflux, hiatal hernia may be found at barium study
• Incompetent lower esophageal sphincter (LES); endoscopy with
biopsy may be necessary to exclude other diagnoses
• Esophageal pH helpful during symptoms
• Diminished LES tone also seen in obesity, pregnancy, hiatal
hernia, nasogastric tube
■ Differential Diagnosis
• Peptic ulcer disease
• Angina pectoris
• Achalasia, esophageal spasm, pill esophagitis
■ Treatment
• Weight loss, avoidance of late-night meals, elevation of head of bed
• Avoid chocolate, caffeine, tobacco, alcohol
• High-dose H2 blockers or proton pump inhibitors
• Surgical fundoplication for patients intolerant or allergic to medical therapy or refractory cases with predominantly regurgitation
or nonacid reflux; use caution in patients whose primary complaint
is heartburn and who are found to have nonerosive GER, as these
patients likely have a component of visceral hypersensitivity that
may be exacerbated by surgery.
■ Pearl
Eradication of H. pylori may actually worsen GERD; the gastric acid
secretion increases upon eradication of the bacterium.
Reference
Fass R. Proton pump inhibitor failure: what are the therapeutic options? Am J
Gastroenterol 2009;104(suppl):S33. [PMID: 19262545]
Intestinal Tuberculosis
■ Essentials of Diagnosis
• Chronic abdominal pain, anorexia, bloating; weight loss, fever,
diarrhea, new-onset ascites in many
• Mild right lower quadrant tenderness, as ileocecal area is the most
commonly involved intestinal site; fistula-in-ano sometimes seen
• Barium study may reveal mucosal ulcerations or scarring and
fibrosis with narrowing of the small or large intestine
• In peritonitis, ascitic fluid has high protein and mononuclear pleocytosis; peritoneal biopsy with granulomas is more sensitive than
ascites AFB culture; high adenosine deaminase levels in ascitic
fluid may suggest the diagnosis; TB peritonitis more common in
those with immune compromise
• Complications include intestinal obstruction, hemorrhage, fistula
formation, and bacterial overgrowth with malabsorption
■ Differential Diagnosis
• Carcinoma of the colon or small bowel
• Inflammatory bowel disease: Crohn’s disease
• Ameboma or Yersinia infection
• Intestinal lymphoma or amyloidosis
• Ovarian or peritoneal carcinomatosis
• Mycobacterium avium-intracellulare infection
■ Treatment
• Standard therapy for tuberculosis; as infection heals, the affected
bowel may develop stricture
■ Pearl
Seen uncommonly in the developed world, but experienced clinicians
have long noted that exploratory laparotomy for suspected small bowel
obstruction relieves symptoms without antituberculous therapy.
Reference
Donoghue HD, Holton J. Intestinal tuberculosis. Curr Opin Infect Dis
2009;22:490. [PMID: 19623062]
Carpal Tunnel Syndrome
■ Essentials of Diagnosis
• The most common entrapment neuropathy, caused by compression of the median nerve (which innervates the flexor muscles of
the wrist and fingers)
• Middle-aged women and those with a history of repetitive use of
the hands commonly affected
• Pain classically worse at night (sleep with hands curled into the
body) and exacerbated by hand movement
• Initial symptoms of pain or paresthesias in thumb, index, middle, and
lateral part of ring finger; progression to thenar eminence wasting
• Pain radiation to forearm, shoulder, neck, chest, or other fingers
of the hand not uncommon
• Positive Tinel’s sign
• Usually idiopathic; in bilateral onset consider secondary causes
including rheumatoid arthritis, amyloidosis, sarcoidosis, hypothyroidism, diabetes, pregnancy, acromegaly, gout
• Diagnosis is primarily clinical; detection of deficits by electrodiagnostic testing (assessing nerve conduction velocity) very helpful to guide referral for surgical release
■ Differential Diagnosis
• C6 or C7 cervical radiculopathy
• Thoracic outlet syndrome leading to brachial plexus neuropathy
• Mononeuritis multiplex
• Syringomyelia
• Multiple sclerosis
• Angina pectoris, especially when left-sided
■ Treatment
• Conservative measures initially, including hand rest, nocturnal
splinting of wrists, anti-inflammatory medications
• Steroid injection into the carpal tunnel occasionally
• Surgical decompression in a few who have nerve conduction
abnormalities; best done before development of thenar atrophy
■ Pearl
Carpal tunnel affects the radial three and one-half fingers, myocardial
ischemia the ulnar one and one-half; remember this in evaluating arm
pain—and hope it’s the right arm.
Reference
Dahlin LB, Salö M, Thomsen N, Stütz N. Carpal tunnel syndrome and treatment
of recurrent symptoms. Scand J Plast Reconstr Surg Hand Surg 2010;44:4.
[PMID: 20136467] | Use only the information provided in the prompt to answer questions. Do not use any prior knowledge or external sources. List the response without numbers or bullet points. Restate the question as an introductory sentence. No other text in the response. What are the conditions that have a differential diagnosis of angina pectoris?
Diffuse Esophageal Spasm
■ Essentials of Diagnosis
• Dysphagia, noncardiac chest pain, hypersalivation, reflux of
recently ingested food
• May be precipitated by ingestion of hot or cold foods
• Endoscopic, radiographic, and manometric demonstration of nonpropulsive hyperperistalsis; lower esophageal sphincter relaxes
normally
• “Nutcracker esophagus” variant with prolonged, high-pressure
(> 175 mm Hg) propulsive contractions
■ Differential Diagnosis
• Angina pectoris
• Esophageal or mediastinal tumors
• Aperistalsis
• Achalasia
• Psychiatric disease
■ Treatment
• Trial of acid suppression.
• Calcium channel blockers such as nifedipine or diltiazem in combination with nitrates often effective. For patient failing to respond,
possible role for sildenafil, botulinum toxin.
• Trazodone or tricyclic antidepressants for substernal pain
■ Pearl
This condition may be indistinguishable from myocardial ischemia;
exclude that possibility before investigating the esophagus.
Reference
Grübel C, Borovicka J, Schwizer W, Fox M, Hebbard G. Diffuse esophageal
spasm. Am J Gastroenterol 2008;103:450. [PMID: 18005367]
Disaccharidase (Lactase) Deficiency
■ Essentials of Diagnosis
• Common in Asians and blacks, in whom lactase enzyme deficiency is nearly ubiquitous and begins in childhood; can also be
acquired temporarily after gastroenteritis of other causes
• Symptoms vary from abdominal bloating, distention, cramps, and flatulence to explosive diarrhea in response to disaccharide ingestion
• Stool pH < 5.5; reducing substances present in stool
• Abnormal lactose hydrogen breath test, resolution of symptoms
on lactose-free diet, or flat glucose response to disaccharide loading suggests the diagnosis
■ Differential Diagnosis
• Chronic mucosal malabsorptive disorders
• Irritable bowel syndrome
• Celiac sprue
• Small intestinal bacterial overgrowth
• Inflammatory bowel disease
• Pancreatic insufficiency
• Giardiasis
• Excess artificial sweetener use
■ Treatment
• Restriction of dietary lactose; usually happens by experience in
affected minorities from early life
• Lactase enzyme supplementation
• Maintenance of adequate nutritional and calcium intake
■ Pearl
Consider this in undiagnosed diarrhea; the patient may not be aware
of ingesting lactose-containing foods.
Foreign Bodies in the Esophagus
■ Essentials of Diagnosis
• Most common in children, edentulous older patients, and the
severely mentally impaired
• Occurs at physiologic areas of narrowing (upper esophageal
sphincter, the level of the aortic arch, or the diaphragmatic hiatus)
• Other predisposing factors favoring impaction include Zenker’s
diverticulum, webs, achalasia, peptic strictures, or malignancy
• Recent ingestion of food or foreign material (coins most commonly in children, meat bolus most common in adults), but the
history may be missing
• Vague discomfort in chest or neck, dysphagia, inability to handle
secretions, odynophagia, hypersalivation, and stridor or dyspnea
in children
• Radiographic or endoscopic evidence of esophageal obstruction
by foreign body
■ Differential Diagnosis
• Esophageal stricture
• Eosinophilic esophagitis
• Esophageal or mediastinal tumor
• Angina pectoris
■ Treatment
• Endoscopic removal with airway protection as needed and the
use of an overtube if sharp objects are present
• Emergent endoscopy should be used for sharp objects, disk batteries (secondary to risk of perforation due to their caustic nature),
or evidence of the inability to handle secretions; objects retained
in the esophagus should be removed within 24 hours of ingestion
• Endoscopy is successful in > 90% of cases; avoid barium studies
before endoscopy, as they impair visualization
■ Pearl
Treatment is ordinarily straightforward; diagnosis may not be, especially
in the very young and very old.
Reference
Eisen GM, Baron TH, Dominitz JA, et al; American Society for Gastrointestinal
Endoscopy. Guideline for the management of ingested foreign bodies.
Gastrointest Endosc 2002;55:802. [PMID: 12024131]
Gastritis
■ Essentials of Diagnosis
• May be acute (erosive) or indolent (atrophic); multiple varied causes
• Symptoms often vague and include nausea, vomiting, anorexia,
nondescript upper abdominal distress
• Mild epigastric tenderness to palpation; in some, physical signs
absent
• Iron deficiency anemia not unusual
• Endoscopy with gastric biopsy for definitive diagnosis
• Multiple associations include stress and diminished mucosal blood
flow (burns, sepsis, critical illness), drugs (NSAIDs, salicylates),
atrophic states (aging, pernicious anemia), previous surgery (gastrectomy, Billroth II), H. pylori infection, acute or chronic alcoholism
■ Differential Diagnosis
• Peptic ulcer
• Hiatal hernia
• Malignancy of stomach or pancreas
• Cholecystitis
• Ischemic cardiac disease
■ Treatment
• Avoidance of alcohol, caffeine, salicylates, tobacco, and NSAIDs
• Investigate for presence of H. pylori; eradicate if present
• Proton pump inhibitors in patients receiving oral feedings,
H2 inhibitors, or sucralfate
• Prevention in high-risk patients (eg, intensive care setting) using
these same agents
■ Pearl
Ninety-five percent of gastroenterologists and a high proportion of other
health care workers carry H. pylori.
Reference
El-Zimaity H. Gastritis and gastric atrophy. Curr Opin Gastroenterol 2008;24:682.
[PMID: 19122515]
Gastroesophageal Reflux Disease (GERD)
■ Essentials of Diagnosis
• Substernal burning (pyrosis) or pressure, aggravated by recumbency and relieved with sitting; can cause dysphagia, odynophagia,
atypical chest pain; proton pump inhibitor may be diagnostic and
therapeutic; further testing when diagnosis unclear, symptoms
refractory
• Reflux, hiatal hernia may be found at barium study
• Incompetent lower esophageal sphincter (LES); endoscopy with
biopsy may be necessary to exclude other diagnoses
• Esophageal pH helpful during symptoms
• Diminished LES tone also seen in obesity, pregnancy, hiatal
hernia, nasogastric tube
■ Differential Diagnosis
• Peptic ulcer disease
• Angina pectoris
• Achalasia, esophageal spasm, pill esophagitis
■ Treatment
• Weight loss, avoidance of late-night meals, elevation of head of bed
• Avoid chocolate, caffeine, tobacco, alcohol
• High-dose H2 blockers or proton pump inhibitors
• Surgical fundoplication for patients intolerant or allergic to medical therapy or refractory cases with predominantly regurgitation
or nonacid reflux; use caution in patients whose primary complaint
is heartburn and who are found to have nonerosive GER, as these
patients likely have a component of visceral hypersensitivity that
may be exacerbated by surgery.
■ Pearl
Eradication of H. pylori may actually worsen GERD; the gastric acid
secretion increases upon eradication of the bacterium.
Reference
Fass R. Proton pump inhibitor failure: what are the therapeutic options? Am J
Gastroenterol 2009;104(suppl):S33. [PMID: 19262545]
Intestinal Tuberculosis
■ Essentials of Diagnosis
• Chronic abdominal pain, anorexia, bloating; weight loss, fever,
diarrhea, new-onset ascites in many
• Mild right lower quadrant tenderness, as ileocecal area is the most
commonly involved intestinal site; fistula-in-ano sometimes seen
• Barium study may reveal mucosal ulcerations or scarring and
fibrosis with narrowing of the small or large intestine
• In peritonitis, ascitic fluid has high protein and mononuclear pleocytosis; peritoneal biopsy with granulomas is more sensitive than
ascites AFB culture; high adenosine deaminase levels in ascitic
fluid may suggest the diagnosis; TB peritonitis more common in
those with immune compromise
• Complications include intestinal obstruction, hemorrhage, fistula
formation, and bacterial overgrowth with malabsorption
■ Differential Diagnosis
• Carcinoma of the colon or small bowel
• Inflammatory bowel disease: Crohn’s disease
• Ameboma or Yersinia infection
• Intestinal lymphoma or amyloidosis
• Ovarian or peritoneal carcinomatosis
• Mycobacterium avium-intracellulare infection
■ Treatment
• Standard therapy for tuberculosis; as infection heals, the affected
bowel may develop stricture
Carpal Tunnel Syndrome
■ Essentials of Diagnosis
• The most common entrapment neuropathy, caused by compression of the median nerve (which innervates the flexor muscles of
the wrist and fingers)
• Middle-aged women and those with a history of repetitive use of
the hands commonly affected
• Pain classically worse at night (sleep with hands curled into the
body) and exacerbated by hand movement
• Initial symptoms of pain or paresthesias in thumb, index, middle, and
lateral part of ring finger; progression to thenar eminence wasting
• Pain radiation to forearm, shoulder, neck, chest, or other fingers
of the hand not uncommon
• Positive Tinel’s sign
• Usually idiopathic; in bilateral onset consider secondary causes
including rheumatoid arthritis, amyloidosis, sarcoidosis, hypothyroidism, diabetes, pregnancy, acromegaly, gout
• Diagnosis is primarily clinical; detection of deficits by electrodiagnostic testing (assessing nerve conduction velocity) very helpful to guide referral for surgical release
■ Differential Diagnosis
• C6 or C7 cervical radiculopathy
• Thoracic outlet syndrome leading to brachial plexus neuropathy
• Mononeuritis multiplex
• Syringomyelia
• Multiple sclerosis
• Angina pectoris, especially when left-sided
■ Treatment
• Conservative measures initially, including hand rest, nocturnal
splinting of wrists, anti-inflammatory medications
• Steroid injection into the carpal tunnel occasionally
• Surgical decompression in a few who have nerve conduction
abnormalities; best done before development of thenar atrophy
■ Pearl
Carpal tunnel affects the radial three and one-half fingers, myocardial
ischemia the ulnar one and one-half; remember this in evaluating arm
pain—and hope it’s the right arm.
Reference
Dahlin LB, Salö M, Thomsen N, Stütz N. Carpal tunnel syndrome and treatment
of recurrent symptoms. Scand J Plast Reconstr Surg Hand Surg 2010;44:4.
[PMID: 20136467] |
Use information present in the text to support your response. Do not use outside information. | What does the attached document have to say about the prognosis of someone diagnosed with malignant MS? | **What is multiple sclerosis?**
Multiple sclerosis (MS) is the most common disabling neurological disease of young adults with symptom onset generally occurring between the ages of 20 to 40 years.
In MS, the immune system cells that normally protect us from viruses, bacteria, and unhealthy cells mistakenly attack myelin in the central nervous system (brain, optic nerves, and spinal cord). Myelin is a substance that makes up the protective sheath (myelin sheath) that coats nerve fibers (axons).
MS is a chronic disease that affects people differently. A small number of people with MS will have a mild course with little to no disability, whereas others will have a steadily worsening disease that leads to increased disability over time. Most people with MS, however, will have short periods of symptoms followed by long stretches of relative quiescence (inactivity or dormancy), with partial or full recovery. The disease is rarely fatal and most people with MS have a normal life expectancy.
Myelin and the immune system
MS attacks axons in the central nervous system protected by myelin, which are commonly called white matter. MS also damages the nerve cell bodies, which are found in the brain's gray matter, as well as the axons themselves in the brain, spinal cord, and optic nerves that transmit visual information from the eye to the brain. As the disease progresses, the outermost layer of the brain, called the cerebral cortex, shrinks in a process known as cortical atrophy.
The term multiple sclerosis refers to the distinctive areas of scar tissue (sclerosis—also called plaques or lesions) that result from the attack on myelin by the immune system. These plaques are visible using magnetic resonance imaging (MRI). Plaques can be as small as a pinhead or as large as a golf ball.
The symptoms of MS depend on the severity of the inflammatory reaction as well as the location and extent of the plaques, which primarily appear in the brain stem, cerebellum (involved with balance and coordination of movement, among other functions), spinal cord, optic nerves, and the white matter around the brain ventricles (fluid-filled cavities).
Signs and symptoms of MS
The natural course of MS is different for each person, which makes it difficult to predict. The onset and duration of MS symptoms usually depend on the specific type but may begin over a few days and go away quickly or develop more slowly and gradually over many years.
There are four main types of MS, named according to the progression of symptoms over time:
Relapsing-remitting MS—Symptoms in this type come in the form of attacks. In between attacks, people recover or return to their usual level of disability. When symptoms occur in this form of MS, it is called an attack, a relapse, or exacerbation. The periods of disease inactivity between MS attacks are referred to as remission. Weeks, months, or even years may pass before another attack occurs, followed again by a period of inactivity. Most people with MS are initially diagnosed with this form of the disease.
Secondary-progressive MS—People with this form of MS usually have had a previous history of MS attacks but then start to develop gradual and steady symptoms and deterioration in their function over time. Most individuals with severe relapsing-remitting MS may go on to develop secondary progressive MS if they are untreated.
Primary-progressive MS—This type of MS is less common and is characterized by progressively worsening symptoms from the beginning with no noticeable relapses or exacerbations of the disease, although there may be temporary or minor relief from symptoms.
Progressive-relapsing MS—The rarest form of MS is characterized by a steady worsening of symptoms from the beginning with acute relapses that can occur over time during the disease course.
There are some rare and unusual variants of MS, such as:
Marburg variant MS (also known as malignant MS) causes swift and relentless symptoms and decline in function, and may result in significant disability or even death shortly after disease onset.
Balo's concentric sclerosis causes concentric rings of myelin destruction that can be seen on an MRI and is another variant type of MS that can progress rapidly.
Early MS symptoms often include:
Vision problems such as blurred or double vision, or optic neuritis, which causes pain with eye movement and rapid vision loss
Muscle weakness, often in the hands and legs, and muscle stiffness accompanied by painful muscle spasms
Tingling, numbness, or pain in the arms, legs, trunk, or face
Clumsiness, especially difficulty staying balanced when walking
Bladder control problems
Intermittent or constant dizziness
MS may also cause later symptoms, such as:
Mental or physical fatigue which accompanies the early symptoms during an attack
Mood changes such as depression or difficulty with emotional expression or control
Cognitive dysfunction—problems concentrating, multitasking, thinking, learning, or difficulties with memory or judgment
Muscle weakness, stiffness, and spasms may be severe enough to affect walking or standing. In some cases, MS leads to partial or complete paralysis and the use of a wheelchair is not uncommon, particularly in individuals who are untreated or have advanced disease. Many people with MS find that weakness and fatigue are worse when they have a fever or when they are exposed to heat. MS exacerbations may occur following common infections.
Pain is rarely the first sign of MS but pain often occurs with optic neuritis and trigeminal neuralgia, a disorder that affects one of the nerves that provides sensation to different parts of the face. Painful limb spasms and sharp pain shooting down the legs or around the abdomen can also be symptoms of MS.
Genetic susceptibility
MS itself is not inherited, but susceptibility to MS may be inherited. Studies show that some individuals with MS have one or more family members or relatives who also have MS.
Current research suggests that dozens of genes and possibly hundreds of variations in the genetic code (gene variants) combine to create vulnerability to MS. Some of these genes have been identified, and most are associated with functions of the immune system. Many of the known genes are similar to those that have been identified in people with other autoimmune diseases such as type 1 diabetes, rheumatoid arthritis, or lupus.
Infectious factors and viruses
Several viruses have been found in people with MS, but the virus most consistently linked to the development of MS is the Epstein-Barr virus (EBV) which causes infectious mononucleosis.
Only about five percent of the population has not been infected by EBV. These individuals are at a lower risk for developing MS than those who have been infected. People who were infected with EBV in adolescence or adulthood, and who therefore develop an exaggerated immune response to EBV, are at a significantly higher risk for developing MS than those who were infected in early childhood. This suggests that it may be the type of immune response to EBV that may lead to MS, rather than EBV infection itself. However, there is still no proof that EBV causes MS and the mechanisms that underlie this process are poorly understood.
Environmental factors
Several studies indicate that people who spend more time in the sun and those with relatively higher levels of vitamin D are less likely to develop MS or have a less severe course of disease and fewer relapses. Bright sunlight helps human skin produce vitamin D. Researchers believe that vitamin D may help regulate the immune system in ways that reduce the risk of MS or autoimmunity in general. People from regions near the equator, where there is a great deal of bright sunlight, generally have a much lower risk of MS than people from temperate areas such as the U.S. and Canada.
Studies have found that people who smoke are more likely to develop MS and have a more aggressive disease course. Indeed, people who smoke tend to have more brain lesions and brain shrinkage than non-smokers.
How is multiple sclerosis diagnosed and treated?
Diagnosing MS
There is no single test used to diagnose MS. The disease is confirmed when symptoms and signs develop and are related to different parts of the nervous system at more than one interval and after other alternative diagnoses have been excluded.
Doctors use different tests to rule out or confirm the diagnosis. In addition to a complete medical history, physical examination, and a detailed neurological examination, a doctor may recommend:
MRI scans of the brain and spinal cord to look for the characteristic lesions of MS. A special dye or contrast agent may be injected into a vein to enhance brain images of the active MS lesions.
Lumbar puncture (sometimes called a spinal tap) to obtain a sample of cerebrospinal fluid and examine it for proteins and inflammatory cells associated with the disease. Spinal tap analysis also can rule out diseases that may look like MS.
Evoked potential tests, which use electrodes placed on the skin and painless electric signals to measure how quickly and accurately the nervous system responds to stimulation.
Treating MS
There is no cure for MS, but there are treatments that can reduce the number and severity of relapses and delay the long-term disability progression of the disease.
Corticosteroids, such as intravenous (infused into a vein) methylprednisolone, are prescribed over the course of three to five days. Intravenous steroids quickly and potently suppress the immune system and reduce inflammation. They may be followed by a tapered dose of oral corticosteroids. Clinical trials have shown that these drugs hasten recovery from MS attacks, but do not alter the long-term outcome of the disease.
Plasma exchange (plasmapheresis) can treat severe flare-ups in people with relapsing forms of MS who do not have a good response to methylprednisolone. Plasma exchange involves taking blood out of the body and removing components in the blood's plasma that are thought to be harmful. The rest of the blood, plus replacement plasma, is then transfused back into the body. This treatment has not been shown to be effective for secondary progressive or chronic progressive MS.
Disease-modifying treatments
Current therapies approved by the U.S. Food and Drug Administration (FDA) for MS are designed to modulate or suppress the inflammatory reactions of the disease. They are most effective for relapsing-remitting MS at early stages of the disease. | [Article]
==========
**What is multiple sclerosis?**
Multiple sclerosis (MS) is the most common disabling neurological disease of young adults with symptom onset generally occurring between the ages of 20 to 40 years.
In MS, the immune system cells that normally protect us from viruses, bacteria, and unhealthy cells mistakenly attack myelin in the central nervous system (brain, optic nerves, and spinal cord). Myelin is a substance that makes up the protective sheath (myelin sheath) that coats nerve fibers (axons).
MS is a chronic disease that affects people differently. A small number of people with MS will have a mild course with little to no disability, whereas others will have a steadily worsening disease that leads to increased disability over time. Most people with MS, however, will have short periods of symptoms followed by long stretches of relative quiescence (inactivity or dormancy), with partial or full recovery. The disease is rarely fatal and most people with MS have a normal life expectancy.
Myelin and the immune system
MS attacks axons in the central nervous system protected by myelin, which are commonly called white matter. MS also damages the nerve cell bodies, which are found in the brain's gray matter, as well as the axons themselves in the brain, spinal cord, and optic nerves that transmit visual information from the eye to the brain. As the disease progresses, the outermost layer of the brain, called the cerebral cortex, shrinks in a process known as cortical atrophy.
The term multiple sclerosis refers to the distinctive areas of scar tissue (sclerosis—also called plaques or lesions) that result from the attack on myelin by the immune system. These plaques are visible using magnetic resonance imaging (MRI). Plaques can be as small as a pinhead or as large as a golf ball.
The symptoms of MS depend on the severity of the inflammatory reaction as well as the location and extent of the plaques, which primarily appear in the brain stem, cerebellum (involved with balance and coordination of movement, among other functions), spinal cord, optic nerves, and the white matter around the brain ventricles (fluid-filled cavities).
Signs and symptoms of MS
The natural course of MS is different for each person, which makes it difficult to predict. The onset and duration of MS symptoms usually depend on the specific type but may begin over a few days and go away quickly or develop more slowly and gradually over many years.
There are four main types of MS, named according to the progression of symptoms over time:
Relapsing-remitting MS—Symptoms in this type come in the form of attacks. In between attacks, people recover or return to their usual level of disability. When symptoms occur in this form of MS, it is called an attack, a relapse, or exacerbation. The periods of disease inactivity between MS attacks are referred to as remission. Weeks, months, or even years may pass before another attack occurs, followed again by a period of inactivity. Most people with MS are initially diagnosed with this form of the disease.
Secondary-progressive MS—People with this form of MS usually have had a previous history of MS attacks but then start to develop gradual and steady symptoms and deterioration in their function over time. Most individuals with severe relapsing-remitting MS may go on to develop secondary progressive MS if they are untreated.
Primary-progressive MS—This type of MS is less common and is characterized by progressively worsening symptoms from the beginning with no noticeable relapses or exacerbations of the disease, although there may be temporary or minor relief from symptoms.
Progressive-relapsing MS—The rarest form of MS is characterized by a steady worsening of symptoms from the beginning with acute relapses that can occur over time during the disease course.
There are some rare and unusual variants of MS, such as:
Marburg variant MS (also known as malignant MS) causes swift and relentless symptoms and decline in function, and may result in significant disability or even death shortly after disease onset.
Balo's concentric sclerosis causes concentric rings of myelin destruction that can be seen on an MRI and is another variant type of MS that can progress rapidly.
Early MS symptoms often include:
Vision problems such as blurred or double vision, or optic neuritis, which causes pain with eye movement and rapid vision loss
Muscle weakness, often in the hands and legs, and muscle stiffness accompanied by painful muscle spasms
Tingling, numbness, or pain in the arms, legs, trunk, or face
Clumsiness, especially difficulty staying balanced when walking
Bladder control problems
Intermittent or constant dizziness
MS may also cause later symptoms, such as:
Mental or physical fatigue which accompanies the early symptoms during an attack
Mood changes such as depression or difficulty with emotional expression or control
Cognitive dysfunction—problems concentrating, multitasking, thinking, learning, or difficulties with memory or judgment
Muscle weakness, stiffness, and spasms may be severe enough to affect walking or standing. In some cases, MS leads to partial or complete paralysis and the use of a wheelchair is not uncommon, particularly in individuals who are untreated or have advanced disease. Many people with MS find that weakness and fatigue are worse when they have a fever or when they are exposed to heat. MS exacerbations may occur following common infections.
Pain is rarely the first sign of MS but pain often occurs with optic neuritis and trigeminal neuralgia, a disorder that affects one of the nerves that provides sensation to different parts of the face. Painful limb spasms and sharp pain shooting down the legs or around the abdomen can also be symptoms of MS.
Genetic susceptibility
MS itself is not inherited, but susceptibility to MS may be inherited. Studies show that some individuals with MS have one or more family members or relatives who also have MS.
Current research suggests that dozens of genes and possibly hundreds of variations in the genetic code (gene variants) combine to create vulnerability to MS. Some of these genes have been identified, and most are associated with functions of the immune system. Many of the known genes are similar to those that have been identified in people with other autoimmune diseases such as type 1 diabetes, rheumatoid arthritis, or lupus.
Infectious factors and viruses
Several viruses have been found in people with MS, but the virus most consistently linked to the development of MS is the Epstein-Barr virus (EBV) which causes infectious mononucleosis.
Only about five percent of the population has not been infected by EBV. These individuals are at a lower risk for developing MS than those who have been infected. People who were infected with EBV in adolescence or adulthood, and who therefore develop an exaggerated immune response to EBV, are at a significantly higher risk for developing MS than those who were infected in early childhood. This suggests that it may be the type of immune response to EBV that may lead to MS, rather than EBV infection itself. However, there is still no proof that EBV causes MS and the mechanisms that underlie this process are poorly understood.
Environmental factors
Several studies indicate that people who spend more time in the sun and those with relatively higher levels of vitamin D are less likely to develop MS or have a less severe course of disease and fewer relapses. Bright sunlight helps human skin produce vitamin D. Researchers believe that vitamin D may help regulate the immune system in ways that reduce the risk of MS or autoimmunity in general. People from regions near the equator, where there is a great deal of bright sunlight, generally have a much lower risk of MS than people from temperate areas such as the U.S. and Canada.
Studies have found that people who smoke are more likely to develop MS and have a more aggressive disease course. Indeed, people who smoke tend to have more brain lesions and brain shrinkage than non-smokers.
How is multiple sclerosis diagnosed and treated?
Diagnosing MS
There is no single test used to diagnose MS. The disease is confirmed when symptoms and signs develop and are related to different parts of the nervous system at more than one interval and after other alternative diagnoses have been excluded.
Doctors use different tests to rule out or confirm the diagnosis. In addition to a complete medical history, physical examination, and a detailed neurological examination, a doctor may recommend:
MRI scans of the brain and spinal cord to look for the characteristic lesions of MS. A special dye or contrast agent may be injected into a vein to enhance brain images of the active MS lesions.
Lumbar puncture (sometimes called a spinal tap) to obtain a sample of cerebrospinal fluid and examine it for proteins and inflammatory cells associated with the disease. Spinal tap analysis also can rule out diseases that may look like MS.
Evoked potential tests, which use electrodes placed on the skin and painless electric signals to measure how quickly and accurately the nervous system responds to stimulation.
Treating MS
There is no cure for MS, but there are treatments that can reduce the number and severity of relapses and delay the long-term disability progression of the disease.
Corticosteroids, such as intravenous (infused into a vein) methylprednisolone, are prescribed over the course of three to five days. Intravenous steroids quickly and potently suppress the immune system and reduce inflammation. They may be followed by a tapered dose of oral corticosteroids. Clinical trials have shown that these drugs hasten recovery from MS attacks, but do not alter the long-term outcome of the disease.
Plasma exchange (plasmapheresis) can treat severe flare-ups in people with relapsing forms of MS who do not have a good response to methylprednisolone. Plasma exchange involves taking blood out of the body and removing components in the blood's plasma that are thought to be harmful. The rest of the blood, plus replacement plasma, is then transfused back into the body. This treatment has not been shown to be effective for secondary progressive or chronic progressive MS.
Disease-modifying treatments
Current therapies approved by the U.S. Food and Drug Administration (FDA) for MS are designed to modulate or suppress the inflammatory reactions of the disease. They are most effective for relapsing-remitting MS at early stages of the disease.
----------------
[Query]
==========
What does the attached document have to say about the prognosis of someone diagnosed with malignant MS?
----------------
[Task Instructions]
==========
Use information present in the text to support your response. Do not use outside information. |
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | I am doing some research into my asthma and I want a little help understanding what's happening to me. Can you explain more about the symptoms and causes? | Asthma is a prevalent chronic inflammatory respiratory condition affecting millions of people worldwide and presents substantial challenges in both diagnosis and management. This respiratory condition is characterized by inflammation of the airways, causing intermittent airflow obstruction and bronchial hyperresponsiveness. The hallmark asthma symptoms include coughing, wheezing, and shortness of breath, which can be frequently exacerbated by triggers ranging from allergens to viral infections. Despite treatment advancements, disparities persist in asthma care, with variations in access to diagnosis, treatment, and patient education across different demographics. Severity varies from intermittent symptoms to life-threatening airway closure. Healthcare professionals establish a definitive diagnosis through patient history, physical examination, pulmonary function testing, and appropriate laboratory testing. Spirometry with a post-bronchodilator response (BDR) is the primary diagnostic test. Treatment focuses on providing continued education, routine symptom assessment, access to fast-acting bronchodilators, and appropriate controller medications tailored to disease severity.
Childhood
Wheezing caused by viral infections, particularly respiratory syncytial virus and human rhinovirus, may predispose infants and young children to develop asthma later in life. In addition, early-life exposure to air pollution, including combustion by-products from gas-fired appliances and indoor fires, obesity, and early puberty, also increases the risk of asthma.
Adulthood
The most significant risk factors for adult-onset asthma include tobacco smoke, occupational exposure, and adults with rhinitis or atopy. Studies also suggest a modest increase in asthma incidence among postmenopausal women taking hormone replacement therapy.
Furthermore, the following factors can contribute to asthma and airway hyperreactivity:
Exposure to environmental allergens such as house dust mites, animal allergens (especially from cats and dogs), cockroach allergens, and fungi
Physical activity or exercise
Conditions such as hyperventilation, gastroesophageal reflux disease, and chronic sinusitis
Hypersensitivity to aspirin or nonsteroidal anti-inflammatory drugs (NSAIDs), as well as sulfite sensitivity
Use of β-adrenergic receptor blockers, including ophthalmic preparations
Exposure to irritants such as household sprays and paint fumes
Contact with various high- and low-molecular-weight compounds found in insects, plants, latex, gums, diisocyanates, anhydrides, wood dust, and solder fluxes, which are associated with occupational asthma
Emotional factors or stress
Aspirin-Exacerbated Respiratory Disease
Aspirin-exacerbated respiratory disease (AERD) is a condition characterized by a combination of asthma, chronic rhinosinusitis with nasal polyposis, and NSAID intolerance. Patients with AERD present with upper and lower respiratory tract symptoms after ingesting aspirin or NSAIDs that inhibit cyclooxygenase-1 (COX-1). This condition arises from dysregulated arachidonic acid metabolism and the overproduction of leukotrienes involving the 5-lipoxygenase and cyclooxygenase pathways. AERD affects approximately 7% of adults with asthma.
Occupational-Induced Asthma
Two types of occupational asthma exist based on their appearance after a latency period:
Occupational asthma triggered by workplace sensitizers results from an allergic or immunological process associated with a latency period induced by both low- and high-molecular-weight agents. High-molecular-weight substances, such as flour, contain proteins and polysaccharides of plant or animal origin. Low-molecular-weight substances, like formaldehyde, form a sensitizing neoantigen when combined with a human protein.
Occupational asthma caused by irritants involves a nonallergic or nonimmunological process induced by gases, fumes, smoke, and aerosols.
Asthma prevalence in the United States differs among demographic groups, including age, gender, race, and socioeconomic status. The United States Centers for Disease Control and Prevention (CDC) estimates that around 25 million Americans are currently affected by asthma. Among individuals younger than 18, boys exhibit a higher prevalence compared to girls, while among adults, women are more commonly affected than men. Additionally, asthma prevalence is notably higher among Black individuals, with a prevalence of 10.1%, compared to White individuals at 8.1%. Hispanic Americans generally have a lower prevalence of 6.4%, except for those from Puerto Rico, where the prevalence rises to 12.8%. Moreover, underrepresented minorities and individuals living below the poverty line experience the highest incidence of asthma, along with heightened rates of asthma-related morbidity and mortality.
Similar to worldwide data, the mortality rate of asthma in the United States has also undergone a consistent decline. The current mortality rate is 9.86 per million compared to 15.09 per million in 2001. However, mortality rates remain consistently higher for Black patients compared to their White counterparts. According to the CDC, from 1999 to 2016, asthma death rates among adults aged 55 to 64 were 16.32 per 1 million persons, 9.95 per 1 million for females, 9.39 per 1 million for individuals who were not Hispanic or Latino, and notably higher at 25.60 per 1 million for Black patients.
Physical Examination
During physical examination, widespread, high-pitched wheezes are a characteristic finding associated with asthma. However, wheezing is not specific to asthma and is typically absent between acute exacerbations. Findings suggestive of a severe asthma exacerbation include tachypnea, tachycardia, a prolonged expiratory phase, reduced air movement, difficulty speaking in complete sentences or phrases, discomfort when lying supine due to breathlessness, and adopting a "tripod position."[27] The use of the accessory muscles of breathing during inspiration and pulsus paradoxus are additional indicators of a severe asthma attack.
Healthcare professionals may identify extrapulmonary findings that support the diagnosis of asthma, such as pale, boggy nasal mucous membranes, posterior pharyngeal cobblestoning, nasal polyps, and atopic dermatitis. Nasal polyps should prompt further inquiry about anosmia, chronic sinusitis, and aspirin sensitivity to evaluate for AERD. Although AERD is uncommon in children or adolescents, the presence of nasal polyps in a child with lower respiratory disease should prompt an evaluation for cystic fibrosis. Clubbing, characterized by bulbous fusiform enlargement of the distal portion of a digit, is not associated with asthma and should prompt evaluation for alternative diagnoses. Please see StatPearls' companion resource, "Nail Clubbing," for further information. | "================
<TEXT PASSAGE>
=======
Asthma is a prevalent chronic inflammatory respiratory condition affecting millions of people worldwide and presents substantial challenges in both diagnosis and management. This respiratory condition is characterized by inflammation of the airways, causing intermittent airflow obstruction and bronchial hyperresponsiveness. The hallmark asthma symptoms include coughing, wheezing, and shortness of breath, which can be frequently exacerbated by triggers ranging from allergens to viral infections. Despite treatment advancements, disparities persist in asthma care, with variations in access to diagnosis, treatment, and patient education across different demographics. Severity varies from intermittent symptoms to life-threatening airway closure. Healthcare professionals establish a definitive diagnosis through patient history, physical examination, pulmonary function testing, and appropriate laboratory testing. Spirometry with a post-bronchodilator response (BDR) is the primary diagnostic test. Treatment focuses on providing continued education, routine symptom assessment, access to fast-acting bronchodilators, and appropriate controller medications tailored to disease severity.
Childhood
Wheezing caused by viral infections, particularly respiratory syncytial virus and human rhinovirus, may predispose infants and young children to develop asthma later in life. In addition, early-life exposure to air pollution, including combustion by-products from gas-fired appliances and indoor fires, obesity, and early puberty, also increases the risk of asthma.
Adulthood
The most significant risk factors for adult-onset asthma include tobacco smoke, occupational exposure, and adults with rhinitis or atopy. Studies also suggest a modest increase in asthma incidence among postmenopausal women taking hormone replacement therapy.
Furthermore, the following factors can contribute to asthma and airway hyperreactivity:
Exposure to environmental allergens such as house dust mites, animal allergens (especially from cats and dogs), cockroach allergens, and fungi
Physical activity or exercise
Conditions such as hyperventilation, gastroesophageal reflux disease, and chronic sinusitis
Hypersensitivity to aspirin or nonsteroidal anti-inflammatory drugs (NSAIDs), as well as sulfite sensitivity
Use of β-adrenergic receptor blockers, including ophthalmic preparations
Exposure to irritants such as household sprays and paint fumes
Contact with various high- and low-molecular-weight compounds found in insects, plants, latex, gums, diisocyanates, anhydrides, wood dust, and solder fluxes, which are associated with occupational asthma
Emotional factors or stress
Aspirin-Exacerbated Respiratory Disease
Aspirin-exacerbated respiratory disease (AERD) is a condition characterized by a combination of asthma, chronic rhinosinusitis with nasal polyposis, and NSAID intolerance. Patients with AERD present with upper and lower respiratory tract symptoms after ingesting aspirin or NSAIDs that inhibit cyclooxygenase-1 (COX-1). This condition arises from dysregulated arachidonic acid metabolism and the overproduction of leukotrienes involving the 5-lipoxygenase and cyclooxygenase pathways. AERD affects approximately 7% of adults with asthma.
Occupational-Induced Asthma
Two types of occupational asthma exist based on their appearance after a latency period:
Occupational asthma triggered by workplace sensitizers results from an allergic or immunological process associated with a latency period induced by both low- and high-molecular-weight agents. High-molecular-weight substances, such as flour, contain proteins and polysaccharides of plant or animal origin. Low-molecular-weight substances, like formaldehyde, form a sensitizing neoantigen when combined with a human protein.
Occupational asthma caused by irritants involves a nonallergic or nonimmunological process induced by gases, fumes, smoke, and aerosols.
Asthma prevalence in the United States differs among demographic groups, including age, gender, race, and socioeconomic status. The United States Centers for Disease Control and Prevention (CDC) estimates that around 25 million Americans are currently affected by asthma. Among individuals younger than 18, boys exhibit a higher prevalence compared to girls, while among adults, women are more commonly affected than men. Additionally, asthma prevalence is notably higher among Black individuals, with a prevalence of 10.1%, compared to White individuals at 8.1%. Hispanic Americans generally have a lower prevalence of 6.4%, except for those from Puerto Rico, where the prevalence rises to 12.8%. Moreover, underrepresented minorities and individuals living below the poverty line experience the highest incidence of asthma, along with heightened rates of asthma-related morbidity and mortality.
Similar to worldwide data, the mortality rate of asthma in the United States has also undergone a consistent decline. The current mortality rate is 9.86 per million compared to 15.09 per million in 2001. However, mortality rates remain consistently higher for Black patients compared to their White counterparts. According to the CDC, from 1999 to 2016, asthma death rates among adults aged 55 to 64 were 16.32 per 1 million persons, 9.95 per 1 million for females, 9.39 per 1 million for individuals who were not Hispanic or Latino, and notably higher at 25.60 per 1 million for Black patients.
Physical Examination
During physical examination, widespread, high-pitched wheezes are a characteristic finding associated with asthma. However, wheezing is not specific to asthma and is typically absent between acute exacerbations. Findings suggestive of a severe asthma exacerbation include tachypnea, tachycardia, a prolonged expiratory phase, reduced air movement, difficulty speaking in complete sentences or phrases, discomfort when lying supine due to breathlessness, and adopting a "tripod position."[27] The use of the accessory muscles of breathing during inspiration and pulsus paradoxus are additional indicators of a severe asthma attack.
Healthcare professionals may identify extrapulmonary findings that support the diagnosis of asthma, such as pale, boggy nasal mucous membranes, posterior pharyngeal cobblestoning, nasal polyps, and atopic dermatitis. Nasal polyps should prompt further inquiry about anosmia, chronic sinusitis, and aspirin sensitivity to evaluate for AERD. Although AERD is uncommon in children or adolescents, the presence of nasal polyps in a child with lower respiratory disease should prompt an evaluation for cystic fibrosis. Clubbing, characterized by bulbous fusiform enlargement of the distal portion of a digit, is not associated with asthma and should prompt evaluation for alternative diagnoses. Please see StatPearls' companion resource, "Nail Clubbing," for further information.
https://www.ncbi.nlm.nih.gov/books/NBK430901/
================
<QUESTION>
=======
I am doing some research into my asthma and I want a little help understanding what's happening to me. Can you explain more about the symptoms and causes?
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." |
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | With the advancement of technology and no signs of it slowing down, I am worried about career as a filmmaker and content creator. What fields are booming in tech and how will AI affect its productivity when in relation to the human job market? I do not need to know a lot, just the fields to research. I'm specifically interested in what AI tools can do in the content production realm. can you list the fields and give me a rundown on what AI is taking over in the production field? | Four evolving technology trends modernizing the consumer products and retail industry
1. Artificial intelligence/machine learning (AI/ML) and microcomputing to optimize and enhance experience and supply chain
Why it’s important: In the CP&R industry, personalized experiences and efficient supply chains are paramount for winning in the market. Artificial intelligence and machine learning (AI/ML) and microcomputing technologies are crucial for achieving these objectives by enabling real-time data ingestion and action across customer interactions. This, in turn, empowers businesses to understand consumer preferences at a granular and even hyper-local level, driving increased sales and profitability, brand engagement and loyalty, and streamlined supply chain efforts.
Now and in the future, brands and retailers will implement AI/ML across a host of use cases, such as:
2. Generative AI for content creation and innovation
Generative AI holds immense significance in CP&R for its ability to foster innovation in content creation and product development. By harnessing the power of GenAI, businesses can produce fresh and engaging content, get to market with speed and build rapid customer engagement models.
GenAI is still in its early stages and so are its applications in CP&R, but it’s already clear that the possibilities are endless.
Content generation and customization for marketing
Product and promotional development and design
Virtual shopping assistants
Supplier communication and negotiation
Quality control and defect detection
CP&R companies should consider integrating GenAI into operations more broadly across the value chain. Organizations that find the most appropriate use cases and implement them at scale will drive the operational agility that industry stakeholders have been expecting for years. It will be critical to rethink how talent and capital allocation can be repositioned to better drive value when content and innovation can be available at the drop of a hat.
3. Digital twin and predictive analytics to drive process controls and decision-making
Digital twin technology and predictive analytics play a pivotal role in revolutionizing CP&R operations. They facilitate agility and offer a comprehensive view of product lifecycles, supply chains and manufacturing processes.
Digital twin and predictive analytics are not new in CP&R, but applications for their use are becoming more robust
Design and development optimization for consumer goods
Manufacturing process optimization
Inventory management and demand forecasting in retail
As the CP&R industry becomes increasingly more digitally connected and complex, driven by software proliferation and the Internet of Things (IoT), companies should expand their use of digital twins to a wider range of interconnected value chain nodes. This approach will enable a more proactive response to disruption and market shifts by transforming these tools into a means for a truly dynamic enterprise, from the front office through to the back office.
4. Cloud and ERP upgrades for efficiency and scalability
With enterprise resource planning (ERP) upgrades imminent by 2025, modernization is foundational to integrating evolving technology capabilities. Cloud computing provides on-demand data storage and computing power, which is essential for supporting and scaling these new technologies.
CP&R executives should be considering these applications to derive the most value from Cloud and ERP upgrades.
Connecting systems from front to back office
Real-time analytics and computing power
Enhanced data security and compliance
Companies must look at their legacy transaction systems and rationalize how to modernize them to create efficiencies, whether by integrating evolving tech that makes their systems more usable or by upgrading legacy transaction systems to keep pace with their front-end infrastructure.
Considerations for consumer products and retail leaders to help transform evolving tech trends into ‘force multipliers’
It's imperative to recognize that the true power and value behind an evolving digital landscape lie not just in the technologies themselves but in how companies strategically integrate and orchestrate them into their operations and strategic initiatives. The following considerations serve as a guide to help CP&R leaders start a journey down this transformative path.
Data strategy: Establish robust data quality and governance frameworks, as inaccurate or poor-quality data can undermine the success of tech implementations.
Zero-party data: Capitalize on data provided directly by consumers to personalize experiences and tailor product offerings to meet individual preferences.
Evaluating the tech ecosystem: Continuously assess your technology ecosystem by building strategic alliances and leveraging partnerships to gain access to cutting-edge technologies and expertise, further driving innovation, agility and competitive advantage.
Collaboration and co-opetition: Encourage data- and tech-enabled collaboration both within your organization and externally, as partnerships with suppliers, distributors and even competitors can foster innovation and create a more agile system.
Governance: Establish clear governance policies and ethical guidelines for responsible technology use, particularly in areas like AI and data analytics.
Talent agenda: Invest in employee training and upskilling to enable your workforce to effectively utilize technology and adapt to new tools and workflows. | "================
<TEXT PASSAGE>
=======
Four evolving technology trends modernizing the consumer products and retail industry
1. Artificial intelligence/machine learning (AI/ML) and microcomputing to optimize and enhance experience and supply chain
Why it’s important: In the CP&R industry, personalized experiences and efficient supply chains are paramount for winning in the market. Artificial intelligence and machine learning (AI/ML) and microcomputing technologies are crucial for achieving these objectives by enabling real-time data ingestion and action across customer interactions. This, in turn, empowers businesses to understand consumer preferences at a granular and even hyper-local level, driving increased sales and profitability, brand engagement and loyalty, and streamlined supply chain efforts.
Now and in the future, brands and retailers will implement AI/ML across a host of use cases, such as:
2. Generative AI for content creation and innovation
Generative AI holds immense significance in CP&R for its ability to foster innovation in content creation and product development. By harnessing the power of GenAI, businesses can produce fresh and engaging content, get to market with speed and build rapid customer engagement models.
GenAI is still in its early stages and so are its applications in CP&R, but it’s already clear that the possibilities are endless.
Content generation and customization for marketing
Product and promotional development and design
Virtual shopping assistants
Supplier communication and negotiation
Quality control and defect detection
CP&R companies should consider integrating GenAI into operations more broadly across the value chain. Organizations that find the most appropriate use cases and implement them at scale will drive the operational agility that industry stakeholders have been expecting for years. It will be critical to rethink how talent and capital allocation can be repositioned to better drive value when content and innovation can be available at the drop of a hat.
3. Digital twin and predictive analytics to drive process controls and decision-making
Digital twin technology and predictive analytics play a pivotal role in revolutionizing CP&R operations. They facilitate agility and offer a comprehensive view of product lifecycles, supply chains and manufacturing processes.
Digital twin and predictive analytics are not new in CP&R, but applications for their use are becoming more robust
Design and development optimization for consumer goods
Manufacturing process optimization
Inventory management and demand forecasting in retail
As the CP&R industry becomes increasingly more digitally connected and complex, driven by software proliferation and the Internet of Things (IoT), companies should expand their use of digital twins to a wider range of interconnected value chain nodes. This approach will enable a more proactive response to disruption and market shifts by transforming these tools into a means for a truly dynamic enterprise, from the front office through to the back office.
4. Cloud and ERP upgrades for efficiency and scalability
With enterprise resource planning (ERP) upgrades imminent by 2025, modernization is foundational to integrating evolving technology capabilities. Cloud computing provides on-demand data storage and computing power, which is essential for supporting and scaling these new technologies.
CP&R executives should be considering these applications to derive the most value from Cloud and ERP upgrades.
Connecting systems from front to back office
Real-time analytics and computing power
Enhanced data security and compliance
Companies must look at their legacy transaction systems and rationalize how to modernize them to create efficiencies, whether by integrating evolving tech that makes their systems more usable or by upgrading legacy transaction systems to keep pace with their front-end infrastructure.
Considerations for consumer products and retail leaders to help transform evolving tech trends into ‘force multipliers’
It's imperative to recognize that the true power and value behind an evolving digital landscape lie not just in the technologies themselves but in how companies strategically integrate and orchestrate them into their operations and strategic initiatives. The following considerations serve as a guide to help CP&R leaders start a journey down this transformative path.
Data strategy: Establish robust data quality and governance frameworks, as inaccurate or poor-quality data can undermine the success of tech implementations.
Zero-party data: Capitalize on data provided directly by consumers to personalize experiences and tailor product offerings to meet individual preferences.
Evaluating the tech ecosystem: Continuously assess your technology ecosystem by building strategic alliances and leveraging partnerships to gain access to cutting-edge technologies and expertise, further driving innovation, agility and competitive advantage.
Collaboration and co-opetition: Encourage data- and tech-enabled collaboration both within your organization and externally, as partnerships with suppliers, distributors and even competitors can foster innovation and create a more agile system.
Governance: Establish clear governance policies and ethical guidelines for responsible technology use, particularly in areas like AI and data analytics.
Talent agenda: Invest in employee training and upskilling to enable your workforce to effectively utilize technology and adapt to new tools and workflows.
https://www.ey.com/en_us/insights/consumer-products/how-embracing-technology-trends-can-drive-leadership-in-the-next
================
<QUESTION>
=======
With the advancement of technology and no signs of it slowing down, I am worried about career as a filmmaker and content creator. What fields are booming in tech and how will AI affect its productivity when in relation to the human job market? I do not need to know a lot, just the fields to research. I'm specifically interested in what AI tools can do in the content production realm. can you list the fields and give me a rundown on what AI is taking over in the production field?
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | What are key changes to the Social Security program for 2024, related to the cost-of-living adjustments(COLA), taxable earning limit, and disability benefits, and how do they impact recipients? | 7 New Social Security Changes for 2024
The 3.2% COLA for 2024 reflects a drop in inflation—and the 2025 COLA is expected soon
By Rebecca Rosenberg Updated September 13, 2024
Reviewed by Charlene Rhinehart
Fact checked by Rebecca McClay
Every October, the U.S. Social Security Administration (SSA) announces its annual changes to the Social Security program for the following year. For 2024, the changes consist of a 3.2% cost-of-living adjustment (COLA) to the monthly benefit amount, an increase in the maximum earnings subject to the Social Security tax, a rise in disability benefits, and more.
Key Takeaways
Those who are receiving Social Security benefits got a 3.2% raise in 2024.
Social Security tax rates for 2024 are 6.2% for employees and 12.4% for the self-employed.
In 2024, it takes $1,730 to earn a Social Security credit.
The Social Security Administration is expected to release the 2025 COLA soon.
1. COLA Increase
While we don't yet know what the cost-of-living adjustment (COLA) will be for 2025, more than 71 million Social Security recipients received a COLA increase to their monthly benefits of 3.2% in 2024.
The adjustment helps benefits keep pace with inflation and is based on the Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) calculated by the U.S. Bureau of Labor Statistics (BLS).
Based on the increase for 2024, the average monthly benefit for all retired workers is $1,907, up from $1,848.
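As a rough check of that arithmetic, the sketch below simply applies the 3.2% COLA to the prior average benefit; this is an illustration only, and the SSA's own rounding rules can shift the official figure by a few cents.

```python
# Apply the 3.2% COLA to the prior average monthly retirement benefit.
prior_benefit = 1848.00   # average monthly benefit before the 2024 COLA
cola_rate = 0.032         # 3.2% cost-of-living adjustment for 2024

adjusted_benefit = prior_benefit * (1 + cola_rate)
print(f"${adjusted_benefit:,.2f}")  # about $1,907, consistent with the figure above
```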
2. Higher Maximum Monthly Payout
The earliest individuals can claim Social Security retirement benefits is age 62. However, claiming before full retirement age (FRA) will result in a permanently reduced payout.
In 1983, Congress passed a law increasing the full retirement age by two months each year from 2000 to 2022, until it hit 67. In 2024, anyone born in 1960 or later will not reach full retirement age until they are 67.
Those who earn delayed retirement credits—that is, waiting to claim Social Security past full (or normal) retirement age—can collect more than their full, or normal, payout. In 2024, the maximum payout of a worker retiring at full retirement age is $3,822. Retiring at age 70 means a maximum payout of $4,873.
Earning retirement income above a certain threshold—$22,320 in 2024—will temporarily reduce your benefits before your full retirement age. Once you reach full retirement age, you can work as much as you want and your benefits won't be reduced. You'll still receive your full Social Security benefits.
Individuals can earn an additional 8% of their benefit per year up until age 70 by delaying retirement.
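To make the delayed-credit math concrete, here is a minimal sketch. The 8%-per-year credit and the full retirement age of 67 come from this article; the $2,000 starting benefit is a hypothetical value chosen for illustration, and actual SSA computations involve additional rules such as month-by-month proration.

```python
# Delayed retirement credits: roughly +8% of the full-retirement-age (FRA)
# benefit for each year that claiming is postponed, up to age 70.
fra_benefit = 2000.00  # hypothetical monthly benefit at FRA
fra_age = 67           # FRA for anyone born in 1960 or later

for claim_age in range(fra_age, 71):
    years_delayed = claim_age - fra_age
    monthly = fra_benefit * (1 + 0.08 * years_delayed)
    print(f"Claim at {claim_age}: ${monthly:,.2f} per month")
# Waiting from 67 to 70 adds 3 x 8% = 24% to the monthly check in this sketch.
```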
[Illustration: A woman stands at a table looking at a cake with the number 62 on it. You can claim Social Security benefits as early as age 62, but you won’t receive your maximum benefit. Credit: Xiaojie Liu / Investopedia]
3. Earnings Limits Increased
For recipients who work while collecting Social Security benefits, all or part of their benefits may be temporarily withheld, depending on how much they earn. Before reaching full retirement age, recipients can earn up to $22,320 in 2024. After that, $1 will be deducted from their payment for every $2 that exceeds the limit.
Individuals who reach full retirement age in 2024 can earn $59,520, up $3,000 from the 2023 limit of $56,520. For every $3 you earn over the limit, your Social Security benefits will be reduced by $1 for money earned in the months before full retirement age. Once full retirement age is reached, no benefits will be withheld if recipients continue to work.
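A minimal sketch of the two withholding rules just described, using the 2024 limits from this article. This simplified annual version is for illustration only; the SSA actually applies the earnings test month by month and recalculates the benefit once full retirement age is reached.

```python
# Simplified 2024 retirement earnings test.
EARLY_LIMIT = 22_320      # annual limit in years before full retirement age
FRA_YEAR_LIMIT = 59_520   # higher limit in the calendar year FRA is reached

def withheld_before_fra(earnings: float) -> float:
    """$1 withheld for every $2 earned above the early limit."""
    return max(0.0, earnings - EARLY_LIMIT) / 2

def withheld_in_fra_year(earnings: float) -> float:
    """$1 withheld for every $3 earned above the FRA-year limit."""
    return max(0.0, earnings - FRA_YEAR_LIMIT) / 3

print(withheld_before_fra(30_000))    # (30,000 - 22,320) / 2 = 3,840.0
print(withheld_in_fra_year(65_520))   # (65,520 - 59,520) / 3 = 2,000.0
```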
4. Taxable Earnings Rose
Employees paid the 6.2% Social Security tax, with their employer matching that payment, on income of up to $160,200 in 2023. In 2024, the maximum taxable earnings increased to $168,600. The Social Security tax rate remains at 6.2% and 12.4% for the self-employed.
5. Disability Benefits and Income Thresholds Increased
Social Security Disability Insurance (SSDI) provides income for those who can no longer work due to a disability. More than 8.9 million people in the United States who are receiving Social Security disability benefits received a 3.2% increase in 2024.
Disabled workers receive on average $1,537 per month in 2024, up from $1,489 in 2023. Disabled workers with a spouse and one or more children can expect an average of $2,720.
Blind workers have a cap of $2,590 per month in 2024.
6. Higher Credit Earning Threshold
Those born in 1929 or later must earn at least 40 credits (maximum of four per year) over their working life to qualify for Social Security benefits. The amount it takes to earn a single credit goes up each year.
For 2024, it will take $1,730 in earnings per credit.
The number of credits needed for SSDI depends on the age when the recipient becomes disabled.
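A small worked example of the credit math above; the $1,730-per-credit amount and the four-credit annual cap come from the article, while the $10,000 earnings figure is hypothetical.

```python
# Social Security credits earned in 2024.
EARNINGS_PER_CREDIT = 1_730   # 2024 earnings needed for one credit
MAX_CREDITS_PER_YEAR = 4

earnings = 10_000             # hypothetical covered earnings for the year
credits = min(MAX_CREDITS_PER_YEAR, earnings // EARNINGS_PER_CREDIT)
print(credits)                # 4 -- the yearly maximum, reached at $6,920 of earnings
```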
7. Increase in Medicare Part B Premiums
Premiums for Medicare Part B, determined according to the Social Security Act, rose in 2024. The standard monthly premium for Medicare Part B is $174.70 for 2024, up from $164.90 in 2023. The annual deductible for Medicare Part B is $240 in 2024.
Program Funding Through 2035
According to the 2024 Social Security and Medicare Boards of Trustees annual report, Social Security and Medicare programs face future financing issues. The Old-Age and Survivors Insurance (OASI) Trust Fund and the Disability Insurance (DI) Trust Fund are combined to create the OASDI, used to indicate the status of the Social Security program.
As of 2024, OASDI is projected to pay 100% of total scheduled benefits until 2035. At that point, the projected fund's reserves will be depleted and the continuing total fund income will pay 83% of expected benefits.
The Old-Age and Survivors Insurance (OASI) Trust Fund is projected to pay 100% of scheduled benefits until 2033. The fund's reserves will be depleted and continuing program income will be able to pay 79% of benefits. The Disability Insurance (DI) Trust Fund is projected to support 100% of benefits through 2098.
What Is the Highest Social Security Benefit in 2024?
The maximum Social Security benefit for a worker retiring at full retirement age in 2024 is $3,822 monthly. Though uncommon, it's possible to be eligible for triple the Social Security benefits: Social Security retirement benefits, Social Security Disability Insurance (SSDI), and Supplemental Security Income (SSI). Individuals can check their full retirement age on the Social Security Administration’s Retirement Age Calculator.
What Is the Cost-of-Living Adjustment (COLA) for the Military in 2024?
Cost-of-living adjustments (COLAs) for retired military pay increased to 3.2% in 2024, depending on the time of retirement.
Can a Divorced Person Collect Their Ex-Spouse’s Social Security?
Individuals who divorced but were married to a spouse for more than 10 years can likely claim some portion of their spouse’s Social Security benefits. They must be unmarried when collecting Social Security benefits. The widow’s benefit is 71% to 100% of what a spouse received before they died.
The Bottom Line
Social Security benefits increased in 2024 with a COLA based on inflation. The ideal time to take retirement benefits depends on an individual's financial situation and retirement goals. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
What are key changes to the Social Security program for 2024, related to the cost-of-living adjustments (COLA), taxable earnings limit, and disability benefits, and how do they impact recipients?
{passage 0}
==========
7 New Social Security Changes for 2024
The 3.2% COLA for 2024 reflects a drop in inflation—and the 2025 COLA is expected soon
By Rebecca Rosenberg Updated September 13, 2024
Reviewed by Charlene Rhinehart
Fact checked by Rebecca McClay
Every October, the U.S. Social Security Administration (SSA) announces its annual changes to the Social Security program for the following year. For 2024, the changes consist of a 3.2% cost-of-living adjustment (COLA) to the monthly benefit amount, an increase in the maximum earnings subject to the Social Security tax, a rise in disability benefits, and more.
Key Takeaways
Those who are receiving Social Security benefits got a 3.2% raise in 2024.
Social Security tax rates for 2024 are 6.2% for employees and 12.4% for the self-employed.
In 2024, it takes $1,730 to earn a Social Security credit.
The Social Security Administration is expected to release the 2025 COLA soon.
1. COLA Increase
While we don't yet know what the cost-of-living adjustment (COLA) will be for 2025, more than 71 million Social Security recipients received a COLA increase to their monthly benefits of 3.2% in 2024.
The adjustment helps benefits keep pace with inflation and is based on the Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) calculated by the U.S. Bureau of Labor Statistics (BLS).
Based on the increase for 2024, the average monthly benefit for all retired workers is $1,907, up from $1,848.
2. Higher Maximum Monthly Payout
The earliest individuals can claim Social Security retirement benefits is age 62. However, claiming before full retirement age (FRA) will result in a permanently reduced payout.
In 1983, Congress passed a law increasing the full retirement age by two months each year from 2000 to 2022, until it hit 67. In 2024, anyone born in 1960 or later will not reach full retirement age until they are 67.
Those who earn delayed retirement credits—that is, waiting to claim Social Security past full (or normal) retirement age—can collect more than their full, or normal, payout. In 2024, the maximum payout of a worker retiring at full retirement age is $3,822. Retiring at age 70 means a maximum payout of $4,873.
Earning retirement income above a certain threshold—$22,320 in 2024—will temporarily reduce your benefits before your full retirement age. Once you reach full retirement age, you can work as much as you want and your benefits won't be reduced. You'll still receive your full Social Security benefits.
6
Individuals can earn an additional 8% of their benefit per year up until age 70 by delaying retirement.
7
[Illustration (Xiaojie Liu / Investopedia): a woman looks at a cake topped with the number 62. You can claim Social Security benefits as early as age 62, but you won't receive your maximum benefit.]
3. Earnings Limits Increased
For recipients who work while collecting Social Security benefits, all or part of their benefits may be temporarily withheld, depending on how much they earn. Before reaching full retirement age, recipients can earn up to $22,320 in 2024 without any reduction. Above that limit, $1 is deducted from their payment for every $2 of excess earnings.
8
Individuals who reach full retirement age in 2024 can earn $59,520, up $3,000 from the 2023 limit of $56,520. In the months before reaching full retirement age, benefits are reduced by $1 for every $3 earned over that limit. Once full retirement age is reached, no benefits are withheld, no matter how much recipients continue to earn.
2
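A minimal Python sketch of the earnings test described above, using the 2024 limits (a simplification: the SSA actually withholds whole monthly payments rather than fractional dollar amounts, and applies the higher limit only to months before full retirement age in the year you reach it):

```python
# Simplified sketch of the 2024 Social Security retirement earnings test described above.
# The SSA withholds whole monthly checks in practice; this only estimates the dollar amount.

def estimate_withheld_benefits(annual_earnings: float, reaches_fra_this_year: bool) -> float:
    """Estimate dollars withheld from 2024 benefits under the earnings test."""
    if reaches_fra_this_year:
        limit, ratio = 59_520, 3   # $1 withheld per $3 earned over the higher limit
    else:
        limit, ratio = 22_320, 2   # $1 withheld per $2 earned over the limit
    excess = max(0.0, annual_earnings - limit)
    return excess / ratio

# Example: earning $30,000 while below full retirement age
print(estimate_withheld_benefits(30_000, reaches_fra_this_year=False))  # 3840.0
```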
4. Taxable Earnings Rose
Employees paid the 6.2% Social Security tax, with their employer matching that payment, on income of up to $160,200 in 2023. In 2024, the maximum taxable earnings increased to $168,600. The tax rate itself is unchanged: 6.2% for employees (matched by the employer) and 12.4% for the self-employed.
2
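A back-of-the-envelope consequence of the higher wage base (assuming, as is standard, that the rates apply to all earnings up to the cap):

```python
# Rough 2024 maximums implied by the $168,600 wage base quoted above (illustrative).
wage_base = 168_600
print(round(0.062 * wage_base, 2))  # 10453.2  -> employee share (matched by the employer)
print(round(0.124 * wage_base, 2))  # 20906.4  -> self-employed share
```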
5. Disability Benefits and Income Thresholds Increased
Social Security Disability Insurance (SSDI) provides income for those who can no longer work due to a disability. More than 8.9 million people in the United States who are receiving Social Security disability benefits received a 3.2% increase in 2024.
1
Disabled workers receive on average $1,537 per month in 2024, up from $1,489 in 2023. Disabled workers with a spouse and one or more children can expect an average of $2,720.
2
Blind workers have a cap of $2,590 per month in 2024.
2
6. Higher Credit Earning Threshold
Those born in 1929 or later must earn at least 40 credits (maximum of four per year) over their working life to qualify for Social Security benefits. The amount it takes to earn a single credit goes up each year.
9
For 2024, it will take $1,730 in earnings per credit.
2
The number of credits needed for SSDI depends on the age when the recipient becomes disabled.
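A small illustration of the credit arithmetic above, for a hypothetical worker with steady covered earnings:

```python
# Illustrative credit arithmetic for 2024, based on the figures quoted above.
earnings_per_credit = 1_730      # dollars of covered earnings per credit in 2024
max_credits_per_year = 4
credits_needed = 40
print(earnings_per_credit * max_credits_per_year)   # 6920 -> earnings for the yearly maximum
print(credits_needed // max_credits_per_year)       # 10  -> minimum years of work to qualify
```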
7. Increase in Medicare Part B Premiums
Premiums for Medicare Part B, determined according to the Social Security Act, rose in 2024. The standard monthly premium for Medicare Part B is $174.70 for 2024, up from $164.90 in 2023. The annual deductible for Medicare Part B is $240 in 2024.
10
Program Funding Through 2035
According to the 2024 Social Security and Medicare Boards of Trustees annual report, Social Security and Medicare programs face future financing issues. The Old-Age and Survivors Insurance (OASI) Trust Fund and the Disability Insurance (DI) Trust Fund are combined to create the OASDI, used to indicate the status of the Social Security program.
11
As of 2024, OASDI is projected to pay 100% of total scheduled benefits until 2035. At that point, the fund's reserves are projected to be depleted, and continuing total fund income will pay 83% of expected benefits.
11
The Old-Age and Survivors Insurance (OASI) Trust Fund alone is projected to pay 100% of scheduled benefits until 2033; at that point, its reserves will be depleted and continuing program income will be able to pay 79% of benefits. The Disability Insurance (DI) Trust Fund is projected to support 100% of benefits through 2098.
11
What Is the Highest Social Security Benefit in 2024?
The maximum Social Security benefit for a worker retiring at full retirement age in 2024 is $3,822 monthly. Though uncommon, it's possible to be eligible for three types of benefits at once: Social Security retirement benefits, Social Security Disability Insurance (SSDI), and Supplemental Security Income (SSI). Individuals can check their full retirement age on the Social Security Administration’s Retirement Age Calculator.
2
12
What Is the Cost-of-Living Adjustment (COLA) for the Military in 2024?
Cost-of-living adjustments (COLAs) for pay for retired military members increased to 3.2% in 2024, depending on the time of retirement.
13
Can a Divorced Person Collect Their Ex-Spouse’s Social Security?
Individuals who divorced but were married for at least 10 years can likely claim some portion of their ex-spouse’s Social Security benefits, provided they are unmarried when they collect them. If the ex-spouse has died, the surviving divorced spouse’s widow’s or widower’s benefit is 71% to 100% of what the deceased spouse received.
14
The Bottom Line
Social Security benefits increased in 2024 with a COLA based on inflation. The ideal time to take retirement benefits depends on an individual's financial situation and retirement goals.
https://www.investopedia.com/retirement/social-security-changes/ |
Draw your answer from the text below only. | Summarize the different resources offered and list the pros and cons of each. | Introduction
CHFA’s vision is that everyone will have the opportunity for housing stability and economic prosperity, two things that underserved markets often consider out of reach. To support recent initiatives to reduce the homeownership gap between White homeowners and minority homeowners, CHFA is continuously engaging with community to learn how to provide products and services in a meaningful way.
Throughout this engagement, CHFA has heard two consistent requests: to meet homebuyers where they are, and to provide expansive homebuying resources that they can trust to support them on the path to homeownership. Often, community members cite the overabundance of information as being overwhelming and sometimes misleading. As the state housing and finance authority, CHFA is looked to as a trusted resource, so we developed the Homebuyer’s Roadmap to help people access requested information at their own pace.
The Homebuyer’s Roadmap is available online and in a printed folder with supplemental inserts so homebuyers can select the option that works best for them. It begins by introducing CHFA and helping the user understand CHFA’s mission and vision, along with how CHFA can help them on their homebuying journey, before breaking the homebuying process into 10 “stops on the homebuying journey.” The interactive online version allows users to “choose their own adventure” by selecting the stop of most interest based on where they are in the homebuying process. They can also dig deeper into topics of interest by using the additional resource links throughout, which open as pop-ups with summary information or in a new tab. For the printed folder, many of these additional resources are provided as inserts.
Navigation is intuitive; no matter where the user is within the Roadmap, they can use the navigation ribbon on the right of the screen to visit other stops, or the overarching navigation buttons on the bottom-right to visit the home page or the “stops” page. The Roadmap also encourages continued engagement with CHFA: after the final stop, which shares resources to help people maintain homeownership, users can click to a final screen showing links to newsletters, homebuying classes, and our “Help for Homebuyers” site.
The Roadmap has been well received. Since October 2023, it has been viewed by 1,335 users an average of 5.24 times each, showing that people are returning to use it as they progress in their homebuying journey. In addition, 182 people have requested a printed copy.
It is innovative and meets a state housing need.
Colorado Housing and Finance Authority CHFA Homebuyer’s Roadmap Homeownership – Empowering New Buyers
Based on community feedback, we learned that there was a gap in current homebuying educational offerings. We heard from many potential homebuyers that while they wanted to be as informed as possible about the homebuying process, they felt overwhelmed by the sheer amount of information available. Other “roadmap” communications vacillated between two extremes: having limited space to convey adequate information (such as in a print flyer) or trying to include everything in an effort to educate, which resulted in readers losing focus or not being able to locate the exact information they were looking for (such as on a web page).
CHFA’s own products illustrate this gap: we offer a high-level Steps to Homeownership on our site, a one- hour Homebuyer 101 webinar, and an in-depth homebuyer education class for certification (classes last between six to eight hours). The shorter, more high-level options are suited for those just getting started, and the longer class is great for those homebuyers who are moving forward with a purchase, but we were missing the middle piece: a resource that was interactive, allowing homebuyers to navigate at their own pace and quickly find more in-depth information.
The design of the Roadmap was instrumental in this delivery. In addition to the intuitive navigation throughout, pop-ups, information buttons, and third-party links kept the screens uncluttered and let each user create their own experience. While there is a wealth of information, it never feels overwhelming. Utilizing movement throughout creates a fresh and engaging experience while highlighting the individual resources.
It demonstrates effective use of resources, benefits outweighing costs, and a replicable development process.
The Roadmap is an in-house product. Content was developed collaboratively by marketing and home finance team members, the design was completed in Adobe InDesign, and the finished digital product was easily uploaded and integrated into our website functionality. Content updates are easy to implement in the digital version, and we printed a low count of the folder to reduce waste when changes are needed. All the supplements in the folder are those that were already developed prior to the Roadmap launch and are used in other outreach and communication activities. A two-year content review schedule will help to ensure that information is accurate and current.
Overall costs were minimal; the development was incorporated into staff project flow, and the only additional costs are for the small-batch folder printing and individual mailings for requests. The accessibility of the product is also a benefit: it is free, print versions are mailed within one business day of request, and no specific software is required for viewing the online version.
It effectively employs partnerships with industry professionals.
When creating the Roadmap, we wanted to deliver something that lenders, real estate agents, and homebuyer education providers could use with their customers to elevate themselves as a resource. Throughout development, feedback was sought from our Lender Advisory Group and real estate agents regarding content and utilization. Traditionally, our partners value CHFA collateral that they can leverage
for homebuyer engagement. In the words of one, “Why would we do it if CHFA already has [it], and much better than we could?”
It helps CHFA achieve strategic objectives.
One of CHFA’s main goals (and a market differentiator) is that we require homebuyer education course completion to be eligible for our loan products. This is because we know that an informed homebuyer makes a successful homeowner. Many customers cite the course as one of the best values of being a CHFA customer. The Homebuyer’s Roadmap is yet another way for us to help homebuyers feel informed and confident when purchasing a home.
As stated at the beginning, it is an accessible resource that helps to “meet people where they are,” be that where they are in the homebuying process, or in which medium they prefer (online or in print). By providing a simple design with interactive topics, the Homebuyer’s Roadmap allows homebuyers to “choose their own adventure” at the pace with which they are comfortable, and further establishes CHFA as a valuable resource for homebuyers and industry professionals.
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | I would like the numerical information from the text restated in bullet point lists with the relevant textual descriptors. Please retain the section headings to use for organization. You may simplify word choices for the layperson to understand where applicable. | The increase in real GDP primarily reflected increases in consumer spending, private inventory investment, and nonresidential fixed investment. Imports, which are a subtraction in the calculation of GDP, increased (table 2).
Compared to the first quarter, the acceleration in real GDP in the second quarter primarily reflected an upturn in private inventory investment and an acceleration in consumer spending. These movements were partly offset by a downturn in residential fixed investment.
Current‑dollar GDP increased 5.5 percent at an annual rate, or $383.2 billion, in the second quarter to a level of $28.65 trillion, an upward revision of $23.2 billion from the previous estimate (tables 1 and 3). More information on the source data that underlie the estimates is available in the "Key Source Data and Assumptions" file on BEA's website.
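A sketch of how the 5.5 percent figure relates to the dollar change, assuming the standard compound annualization of a one-quarter growth rate (the published number is computed from unrounded source data, so this is only an approximation):

```python
# Approximate annualization of the Q2 current-dollar GDP change quoted above (illustrative).
level_q2 = 28.65          # trillions of dollars
increase = 0.3832         # quarterly increase, trillions
level_q1 = level_q2 - increase
annualized = (level_q2 / level_q1) ** 4 - 1   # compound the one-quarter growth over four quarters
print(f"{annualized:.1%}")                    # ~5.5%, consistent with the reported annual rate
```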
The price index for gross domestic purchases increased 2.4 percent in the second quarter, an upward revision of 0.1 percentage point from the previous estimate. The personal consumption expenditures (PCE) price index increased 2.5 percent, a downward revision of 0.1 percentage point. Excluding food and energy prices, the PCE price index increased 2.8 percent, a downward revision of 0.1 percentage point.
Personal Income
Current-dollar personal income increased $233.6 billion in the second quarter, a downward revision of $4.0 billion from the previous estimate. The increase primarily reflected increases in compensation and personal current transfer receipts (table 8).
Disposable personal income increased $183.0 billion, or 3.6 percent, in the second quarter, a downward revision of $3.2 billion from the previous estimate. Real disposable personal income increased 1.0 percent, unrevised from the prior estimate.
Personal saving was $686.4 billion in the second quarter, a downward revision of $34.1 billion from the previous estimate. The personal saving rate—personal saving as a percentage of disposable personal income—was 3.3 percent in the second quarter, a downward revision of 0.2 percentage point.
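Because the saving rate is defined as personal saving divided by disposable personal income, the two figures above imply the income level they were computed from (a rough cross-check, since published rates use unrounded data):

```python
# Rough cross-check of the personal saving rate figures quoted above (illustrative).
personal_saving = 686.4    # billions of dollars
saving_rate = 0.033        # 3.3 percent of disposable personal income
implied_dpi = personal_saving / saving_rate
print(f"${implied_dpi / 1000:.1f} trillion")   # ~ $20.8 trillion of disposable personal income
```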
Gross Domestic Income and Corporate Profits
Real gross domestic income (GDI) increased 1.3 percent in the second quarter, the same as in the first quarter. The average of real GDP and real GDI, a supplemental measure of U.S. economic activity that equally weights GDP and GDI, increased 2.1 percent in the second quarter, compared with an increase of 1.4 percent in the first quarter (table 1).
Profits from current production (corporate profits with inventory valuation and capital consumption adjustments) increased $57.6 billion in the second quarter, in contrast to a decrease of $47.1 billion in the first quarter (table 10).
Profits of domestic financial corporations increased $46.4 billion in the second quarter, compared with an increase of $65.0 billion in the first quarter. Profits of domestic nonfinancial corporations increased $29.2 billion, in contrast to a decrease of $114.5 billion. Rest-of-the-world profits decreased $18.0 billion, in contrast to an increase of $2.3 billion. In the second quarter, receipts decreased $6.2 billion, and payments increased $11.8 billion.
Updates to GDP
With the second estimate, an upward revision to consumer spending was partly offset by downward revisions to nonresidential fixed investment, exports, private inventory investment, federal government spending, state and local government spending, and residential fixed investment. Imports were revised up. For more information, refer to the Technical Note. For information on updates to GDP, refer to the "Additional Information" section that follows.
| Measure (percent change from preceding quarter) | Advance Estimate | Second Estimate |
| --- | --- | --- |
| Real GDP | 2.8 | 3.0 |
| Current-dollar GDP | 5.2 | 5.5 |
| Real GDI | … | 1.3 |
| Average of Real GDP and Real GDI | … | 2.1 |
| Gross domestic purchases price index | 2.3 | 2.4 |
| PCE price index | 2.6 | 2.5 |
| PCE price index excluding food and energy | 2.9 | 2.8 |
https://www.bea.gov/news/2024/gross-domestic-product-second-estimate-corporate-profits-preliminary-estimate-second
Respond using only information from the provided content.
Adhere to a 300-word limit.
Avoid responding in table format or JSON | According to the above text, what are the benefits of working a job in the tech industry? | **Getting a Job in the Tech Industry**
Because of the tech industry's rapid evolution, employees often possess both technical and nontechnical skills. Companies typically seek unique individuals who can strengthen their business as the industry grows, and some may not even require industry experience as a qualifier for candidates. If you're interested in advancing your career path and increasing your earning potential, consider researching tech job openings.
In this article, we explain what the tech industry is, what to expect as an employee, some benefits of working in tech and steps and tips to help you get a job in the tech industry.
What is the tech industry?
The tech industry encompasses several business sectors, like e-commerce, internet software and services, financial technology, consumer electronics and telecommunications. It's constantly evolving through innovation and new creative processes, which regularly create new jobs. Because there are so many job options, you can allow your interests to guide you towards a career you can enjoy, such as software development, programming or digital communications.
When you accept a job in the tech industry, there are a few things you can expect. For instance, many entry-level positions are technical support roles, so you may be responsible for answering inbound calls and performing troubleshooting to assist users remotely. Depending on the job requirements, you can perform these tasks either in an office or from home. This may involve collaborating with other IT specialists on projects or for user issue resolution.
Tech industry jobs also allow you to make real-world impacts by identifying and evaluating problems and innovating solutions. The tech industry mainly favors meritocracy, which encourages employees to focus on their abilities as opposed to their experience level. This concept can promote a positive and collaborative workplace and show a company's commitment to employee satisfaction. Many tech companies value this kind of work culture, and it often resonates in their brand message and company statements.
Benefits of working in tech
The tech industry offers a variety of unique benefits to its employees. Some of the most significant perks include:
Flexibility:
Many tech companies offer their employees flexible hours and working conditions, which can appeal to a variety of individuals. Mobile and remote tasks give employees the ability to work anywhere, and this can be an exciting and refreshing contrast to consistent office work. These unique assignments and nontraditional workspaces can empower you to innovate new solutions and contribute to an overall increase in productivity by exercising your time management and technical skills.
Work-life balance:
Another advantage of working in the tech industry is the ability to achieve and maintain an effective balance between work and other life activities. Because many technical jobs require remote or mobile tasks, you may be able to manage your time more efficiently by building a schedule that accommodates your personal and professional responsibilities. This can give you the ideal work-life balance, and it could encourage you to lead a successful and productive life.
Positive work environment:
Many tech companies offer substantial perks in their work environments, like complimentary food, a casual dress code and compatible residence areas. Your company might also provide paid time off, volunteer days and insurance. These perks contribute to an optimistic work environment, which can promote creativity, encourage innovation and support your career development in the tech industry.
Career growth and development:
Working in tech also offers several opportunities for career growth and skill development. You can refine your skills and improve your workflow with every task by applying your knowledge to practical experiences. The skills you learn and apply are transferrable, and they can increase your marketability and advance your career by appealing to potential employers. You can also consider applying them independently to create your own startup business.
How to get a job in the tech industry
Getting a job in the tech industry can be a rewarding opportunity and provide you with substantial earning potential. If you're interested in securing a tech job, consider reviewing these steps to help you succeed in your career goals:
1. Develop your technical skills
The first step to securing a job in the tech industry is to develop the technical skills necessary to excel in your career. This might include programming, data science, analytics, software engineering and development, digital marketing and project management. You can establish and improve these skills by researching, talking to industry professionals or reading respected tech publications like journals, newsletters or websites. Learning from those who work in the tech industry can help you understand which skills are essential and how they use them in their daily tasks.
2. Seek a mentor
Having a mentor can give you a distinct advantage in your career development because they impart their professional skills, provide industry knowledge and give you tips to aid in your success. Many mentors offer support, advice and encouragement to guide you towards a rewarding career in tech, and you can learn valuable techniques from those with years of firsthand experience. When seeking a mentor, consider searching for someone who's open-minded and willing to take suggestions. These qualities create a collaborative learning environment for you and your mentor, which strengthens both your skills and relationship.
3. Build your professional network
When you connect with others that share similar interests in the tech industry, you're building your professional network. Attending local conferences, contributing to online tech forums and talking to local professionals are all effective methods to expand your tech industry connections. These introductions can play a vital role in securing a career in technology because they give you opportunities to collaborate with others and gather helpful industry information, such as job listings, resume advice and tips from experienced individuals.
4. Pursue a technical certification
While there are several tech jobs available that don't require a bachelor's degree in qualified candidates, earning one in a related field may help you appeal to potential tech industry employers. Many colleges also offer vocational programs that offer certifications for various skills, like data security, engineering and project management. Consider researching different colleges and websites to learn about the certifications, degrees and intensive training courses that best support your career development.
5. Create a strong, customized resume.
When you apply for tech industry jobs, review the descriptions of the open positions that interest you. This helps you understand the requirements and important aspects of the role. It also gives you the opportunity to customize your resume to appeal to hiring managers. Each company, job and hiring process is unique, and adjusting your resume for each role can help differentiate you from other candidates. Consider including specific skills, tools and programs on your resume that you're familiar with. If a company uses a resume scanning program, keywords can increase the likelihood of the program selecting your resume for further review.
Here are some tips that can help you secure a job in the tech industry:
Research active job listings:
Consider researching active job listings to discover the positions that are currently available. This can help you find which areas of the tech industry interest you, and it may give you a better understanding of the roles that exist. You can also talk to industry professionals to learn their daily activities and necessary skills to determine if these aspects inspire you to seek a specific position.
Take advantage of online courses:
There are programs online that can help you learn valuable skills that can help you excel in the tech industry, like programming, coding or software development. These self-paced programs allow you to learn at your own pace and develop skills without committing to a single program or course, and many provide certifications that can help differentiate you from other candidates in the application process. Even if you don't possess industry experience, online courses can provide a valuable advantage by developing the essential skills that many tech jobs require.
Identify your outsider advantage:
Because the tech industry is constantly changing, many tech companies advertise nontechnical positions from human resources, product marketing or sales development to gain employees with different viewpoints. Candidates without technical experience can provide unique perspectives on how they communicate with technology. Hiring managers often seek candidates with adept communication skills and the ability to relate strongly to others to promote a collaborative work environment and increase project efficiency, so consider including these skills on your resume while applying for jobs.
Research tech startup companies:
Startup companies often forego traditional job requirements to focus more on training and candidate potential, and they usually seek qualified individuals with marketable skills and excellent communication abilities. With these skills and some technical experience, you can be an ideal candidate for many tech startup companies. Consider accepting an internship or finding a mentor so you can apply your technical skills, gain industry experience and become an appealing candidate to startup hiring managers.
Focus on your unique qualities:
When you apply for a job in the tech industry, you can differentiate yourself from other candidates by identifying which skills make you unique. Explaining nontechnical qualities like drive, determination and perseverance can enhance your resume and help you appeal to potential employers. You can also include general soft skills like problem solving, adaptability and quick learning to show hiring managers you're skillful in several areas that can benefit their company.
Respond succinctly and directly. Refer only to the provided document. After your answer, provide any relevant quotes from the source document in italics. | What are the 5th gen Standard Series CPUs based on? | **Microsoft Azure SQL Database pricing**
vCore
A vCore-based purchase model is best if you are looking for flexibility, control and transparency of individual resource consumption. This model allows you to scale compute, memory and storage based upon your workload needs and provides a straightforward way to translate on-premises workload requirements to the cloud.
Serverless compute
The SQL Database serverless compute tier optimises price-performance and simplifies performance management for single databases with intermittent, unpredictable usage by auto-scaling compute and billing for compute used per second. For details, see the FAQ section and documentation.
Hyperscale
Serverless Hyperscale combines the benefits of compute auto-scaling with storage auto-scaling up to 100 TB to help you optimise price-performance of your database resources to meet your workload's needs. If zone redundancy is enabled, the database must have at least one high availability (HA) replica. The pricing below is applicable for both primary and secondary replicas.
Standard-series (Gen 5)
Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers.
Primary replica pricing
| Minimum vCores | Maximum vCores | Minimum Memory (GB) | Maximum Memory (GB) | Price |
| --- | --- | --- | --- | --- |
| 0.5 | 80 | 2.05 | 240 | $0.0001050/vCore-second ($0.378/vCore-hour) |
High Availability Replica and Named Replica Pricing
| Minimum vCores | Maximum vCores | Minimum Memory (GB) | Maximum Memory (GB) | Price |
| --- | --- | --- | --- | --- |
| 0.5 | 80 | 2.05 | 240 | $0.0001050/vCore-second ($0.378/vCore-hour) |
Storage
In the Hyperscale tier, you are charged for storage for your database based on actual allocation. Storage is dynamically allocated between 10 GB and 100 TB, in 10 GB increments.
Storage Price
GB/month $0.25
Back up storage (point-in-time restore)
By default, seven days of backups are stored in RA-GRS Standard blob storage. Any corrupted or deleted database can be restored to any point in time within that period. The storage is used by periodic storage blob snapshots and all generated transaction log. The usage of the backup storage depends on the rate of change of the database and the configured retention period. Back up storage consumption will be charged in GB/month. Learn more about automated backups, and how to monitor and manage backup costs.
Redundancy Price
LRS $0.08/GB/month
ZRS $0.10/GB/month
RA-GRS $0.20/GB/month
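A minimal cost sketch that combines the serverless compute, storage, and backup unit prices listed above; the workload profile is hypothetical and the sketch ignores minimum-billing, HA replica, and zone-redundancy details:

```python
# Hypothetical monthly cost estimate for a serverless Hyperscale database, using the
# unit prices listed above. Ignores minimums, HA replicas, and zone-redundancy surcharges.

VCORE_SECOND = 0.0001050       # $ per vCore-second of billed compute
STORAGE_GB_MONTH = 0.25        # $ per GB-month of allocated storage
BACKUP_LRS_GB_MONTH = 0.08     # $ per GB-month of LRS backup storage

def monthly_cost(vcore_seconds: float, storage_gb: float, backup_gb: float) -> float:
    return (vcore_seconds * VCORE_SECOND
            + storage_gb * STORAGE_GB_MONTH
            + backup_gb * BACKUP_LRS_GB_MONTH)

# Example: 2 vCores busy 8 hours a day for 30 days, 100 GB of data, 40 GB of backups
billed_seconds = 2 * 8 * 3600 * 30
print(round(monthly_cost(billed_seconds, 100, 40), 2))   # ~209.64
```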
Provisioned compute
The SQL Database provisioned compute tier provides a fixed amount of compute resource for a fixed price billed hourly. It optimises price-performance for single databases and elastic pools with more regular usage that cannot afford any delay in compute warm-up after idle usage periods. For details, see the FAQ section and documentation.
Hyperscale
Build new, highly scalable cloud applications on Azure SQL Database Hyperscale. Hyperscale provides rapid, auto-scaling storage up to 100 TB to help you optimise database resources for your workload's needs. To enable zone redundancy, the database must have at least one secondary high availability replica. The pricing below is applicable for both primary and secondary replicas.
Standard-series (Gen 5)
Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake) and Intel(R) Xeon Scalable 2.8 GHz processor (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyper thread. The standard-series (Gen 5) logical CPU is great for most relational database servers.
| vCore | Memory (GB) | Pay as you go | 1-year reserved capacity¹ (~35% savings) | 3-year reserved capacity¹ (~55% savings) |
| --- | --- | --- | --- | --- |
| 2 | 10.2 | $0.366/hour | $0.238/hour | $0.165/hour |
| 4 | 20.4 | $0.731/hour | $0.475/hour | $0.329/hour |
| 6 | 30.6 | $1.096/hour | $0.713/hour | $0.494/hour |
| 8 | 40.8 | $1.462/hour | $0.950/hour | $0.658/hour |
| 10 | 51 | $1.827/hour | $1.188/hour | $0.822/hour |
| 12 | 61.2 | $2.192/hour | $1.425/hour | $0.987/hour |
| 14 | 71.4 | $2.558/hour | $1.663/hour | $1.151/hour |
| 16 | 81.6 | $2.923/hour | $1.900/hour | $1.316/hour |
| 18 | 91.8 | $3.288/hour | $2.137/hour | $1.480/hour |
| 20 | 102 | $3.654/hour | $2.375/hour | $1.644/hour |
| 24 | 122.4 | $4.384/hour | $2.850/hour | $1.973/hour |
| 32 | 163.2 | $5.846/hour | $3.800/hour | $2.631/hour |
| 40 | 204 | $7.307/hour | $4.749/hour | $3.288/hour |
| 80 | 396 | $14.613/hour | $9.498/hour | $6.576/hour |
¹ Learn more about Azure reservations and Azure SQL Database reserved capacity pricing.
Compute is provisioned in virtual cores (vCores) with an option to choose between compute generations.
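The quoted reservation discounts can be sanity-checked directly from the hourly rates in the table above; a small, illustrative example for the 2-vCore configuration:

```python
# Illustrative check of the reserved-capacity discounts for a 2-vCore standard-series (Gen 5)
# provisioned database, using the hourly rates from the table above.
payg, one_year, three_year = 0.366, 0.238, 0.165   # $ per hour
print(f"1-year: {1 - one_year / payg:.0%}")         # ~35% savings
print(f"3-year: {1 - three_year / payg:.0%}")       # ~55% savings
print(f"Pay-as-you-go for a 730-hour month: ${payg * 730:.2f}")   # ~$267.18
```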
DC-series
The DC-series logical CPUs are based on Intel XEON E-2288G processors with Software Guard Extensions (Intel SGX) technology. In the DC-series, 1 vCore = 1 physical core. DC-series supports Always Encrypted with secure enclaves and it is designed for workloads that process sensitive data and demand confidential query processing capabilities.
| vCore | Memory (GB) | Pay as you go |
| --- | --- | --- |
| 2 | 9 | $0.73/hour |
| 4 | 18 | $1.46/hour |
| 6 | 27 | $2.19/hour |
| 8 | 36 | $2.92/hour |
| 10 | 45 | $3.65/hour |
| 12 | 54 | $4.38/hour |
| 14 | 63 | $5.11/hour |
| 16 | 72 | $5.84/hour |
| 18 | 81 | $6.57/hour |
| 20 | 90 | $7.30/hour |
| 32 | 144 | $11.68/hour |
| 40 | 180 | $14.60/hour |
This hardware option is subject to regional availability. See our documentation for the latest list of available regions.
Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations.
Premium-series
Premium-series logical CPUs are based on the latest Intel(R) Xeon (Ice Lake) and AMD EPYCTM 7763v (Milan) chipsets, 1 vCore = 1 hyper thread. The premium-series logical CPU is a great fit for database workloads that require faster compute and memory performance as well as improved IO and network experience over the standard-series hardware offering.
| vCore | Memory (GB) | Pay as you go | 1-year reserved capacity¹ (~35% savings) |
| --- | --- | --- | --- |
| 2 | 10.4 | $0.366/hour | $0.238/hour |
| 4 | 20.8 | $0.731/hour | $0.475/hour |
| 6 | 31.1 | $1.096/hour | $0.713/hour |
| 8 | 41.5 | $1.462/hour | $0.950/hour |
| 10 | 51.9 | $1.827/hour | $1.188/hour |
| 12 | 62.3 | $2.192/hour | $1.425/hour |
| 14 | 72.7 | $2.558/hour | $1.663/hour |
| 16 | 83 | $2.923/hour | $1.900/hour |
| 18 | 93.4 | $3.288/hour | $2.137/hour |
| 20 | 103.8 | $3.654/hour | $2.375/hour |
| 24 | 124.6 | $4.384/hour | $2.850/hour |
| 32 | 166.1 | $5.846/hour | $3.800/hour |
| 40 | 207.6 | $7.307/hour | $4.749/hour |
| 64 | 664.4 | $11.691/hour | $7.599/hour |
| 80 | 415.2 | $14.613/hour | $9.498/hour |
| 128 | 647.8 | $23.381/hour | $15.197/hour |
¹ Learn more about Azure reservations and Azure SQL Database reserved capacity pricing.
Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations. | [Task Description]
=======
Respond succinctly and directly. Refer only to the provided document. After your answer, provide any relevant quotes from the source document in italics.
----------------
[Text]
=======
**Microsoft Azure SQL Database pricing**
vCore
A vCore-based purchase model is best if you are looking for flexibility, control and transparency of individual resource consumption. This model allows you to scale compute, memory and storage based upon your workload needs and provides a straightforward way to translate on-premises workload requirements to the cloud.
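A minimal sketch of that on-premises-to-cloud translation (the server spec below is a made-up example; the 1 vCore = 1 hyperthread and 1 vCore = 1 physical core rules come from the hardware descriptions later on this page):

```python
# Hypothetical on-premises server to translate: 8 physical cores with hyperthreading enabled.
physical_cores = 8
hyperthreading = True

logical_cpus = physical_cores * (2 if hyperthreading else 1)

# Standard- and premium-series hardware: 1 vCore = 1 hyperthread (logical CPU).
# DC-series hardware: 1 vCore = 1 physical core.
print(f"Standard/premium-series sizing: ~{logical_cpus} vCores")
print(f"DC-series sizing: ~{physical_cores} vCores")
```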
Serverless compute
The SQL Database serverless compute tier optimises price-performance and simplifies performance management for single databases with intermittent, unpredictable usage by auto-scaling compute and billing for compute used per second. For details, see the FAQ section and documentation.
Hyperscale
Serverless Hyperscale combines the benefits of compute auto-scaling with storage auto-scaling up to 100 TB to help you optimise price-performance of your database resources to meet your workload's needs. If zone redundancy is enabled, the database must have at least one high availability (HA) replica. The pricing below is applicable for both primary and secondary replicas.
Standard-series (Gen 5)
Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake), and Intel(R) Xeon Scalable 2.8 GHz (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyperthread. The standard-series (Gen 5) logical CPU is great for most relational database servers.
Primary replica pricing
| Minimum vCores | Maximum vCores | Minimum Memory (GB) | Maximum Memory (GB) | Price |
|----------------|----------------|---------------------|---------------------|-------|
| 0.5            | 80             | 2.05                | 240                 | $0.0001050/vCore-second ($0.378/vCore-hour) |
High Availability Replica and Named Replica Pricing
| Minimum vCores | Maximum vCores | Minimum Memory (GB) | Maximum Memory (GB) | Price |
|----------------|----------------|---------------------|---------------------|-------|
| 0.5            | 80             | 2.05                | 240                 | $0.0001050/vCore-second ($0.378/vCore-hour) |
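As a rough illustration of the per-second serverless billing above, a short sketch (the rate is the standard-series (Gen 5) figure from the tables; the usage profile and the single HA replica are assumptions):

```python
# Illustrative serverless Hyperscale compute estimate.
VCORE_SECOND_RATE = 0.0001050  # USD per billed vCore-second (standard-series Gen 5)

def serverless_compute_cost(avg_vcores: float, active_hours: float, ha_replicas: int = 0) -> float:
    """Billed vCore-seconds x rate, for the primary plus any HA or named replicas."""
    vcore_seconds = avg_vcores * active_hours * 3600
    return vcore_seconds * VCORE_SECOND_RATE * (1 + ha_replicas)

# Example: ~4 vCores on average, active 6 hours/day for 30 days, one HA replica for zone redundancy.
print(f"~${serverless_compute_cost(4, 6 * 30, ha_replicas=1):,.2f} of compute for the month")
```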
Storage
In the Hyperscale tier, you are charged for storage for your database based on actual allocation. Storage is dynamically allocated between 10 GB and 100 TB, in 10 GB increments.
| Storage  | Price |
|----------|-------|
| GB/month | $0.25 |
Backup storage (point-in-time restore)
By default, seven days of backups are stored in RA-GRS Standard blob storage. Any corrupted or deleted database can be restored to any point in time within that period. The storage is used by periodic storage blob snapshots and all generated transaction logs. The usage of the backup storage depends on the rate of change of the database and the configured retention period. Backup storage consumption is charged in GB/month. Learn more about automated backups and how to monitor and manage backup costs.
| Redundancy | Price           |
|------------|-----------------|
| LRS        | $0.08/GB/month  |
| ZRS        | $0.10/GB/month  |
| RA-GRS     | $0.20/GB/month  |
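Putting the storage and backup rates together, a small worked example (unit prices come from the two tables above; the allocation size, backup footprint, and redundancy choice are assumptions):

```python
# Illustrative monthly Hyperscale storage bill.
DATA_STORAGE_PER_GB = 0.25                                 # USD per GB/month of allocated data storage
BACKUP_RATES = {"LRS": 0.08, "ZRS": 0.10, "RA-GRS": 0.20}  # USD per GB/month of backup storage

allocated_gb = 500      # data storage is allocated in 10 GB increments, up to 100 TB
backup_gb = 120         # depends on the database's rate of change and the retention period
redundancy = "RA-GRS"   # default redundancy for point-in-time restore backups

monthly = allocated_gb * DATA_STORAGE_PER_GB + backup_gb * BACKUP_RATES[redundancy]
print(f"~${monthly:,.2f}/month for data plus backup storage")  # 500*0.25 + 120*0.20 = 149.00
```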
Provisioned compute
The SQL Database provisioned compute tier provides a fixed amount of compute resource for a fixed price billed hourly. It optimises price-performance for single databases and elastic pools with more regular usage that cannot afford any delay in compute warm-up after idle usage periods. For details, see the FAQ section and documentation.
Hyperscale
Build new, highly scalable cloud applications on Azure SQL Database Hyperscale. Hyperscale provides rapid, auto-scaling storage up to 100 TB to help you optimise database resources for your workload's needs. To enable zone redundancy, the database must have at least one secondary high availability replica. The pricing below is applicable for both primary and secondary replicas.
Standard-series (Gen 5)
Standard-series (Gen 5) logical CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz, Intel SP8160 (Skylake), Intel Xeon Platinum 8272CL 2.5 GHz (Cascade Lake), and Intel(R) Xeon Scalable 2.8 GHz (Ice Lake) processors. In the standard-series (Gen 5), 1 vCore = 1 hyperthread. The standard-series (Gen 5) logical CPU is great for most relational database servers.
| vCore | Memory (GB) | Pay as you go | 1-year reserved capacity¹ | 3-year reserved capacity¹ |
|-------|-------------|---------------|----------------------------|----------------------------|
| 2     | 10.2        | $0.366/hour   | $0.238/hour (~35% savings) | $0.165/hour (~55% savings) |
| 4     | 20.4        | $0.731/hour   | $0.475/hour (~35% savings) | $0.329/hour (~55% savings) |
| 6     | 30.6        | $1.096/hour   | $0.713/hour (~35% savings) | $0.494/hour (~55% savings) |
| 8     | 40.8        | $1.462/hour   | $0.950/hour (~35% savings) | $0.658/hour (~55% savings) |
| 10    | 51          | $1.827/hour   | $1.188/hour (~35% savings) | $0.822/hour (~55% savings) |
| 12    | 61.2        | $2.192/hour   | $1.425/hour (~35% savings) | $0.987/hour (~55% savings) |
| 14    | 71.4        | $2.558/hour   | $1.663/hour (~35% savings) | $1.151/hour (~55% savings) |
| 16    | 81.6        | $2.923/hour   | $1.900/hour (~35% savings) | $1.316/hour (~55% savings) |
| 18    | 91.8        | $3.288/hour   | $2.137/hour (~35% savings) | $1.480/hour (~55% savings) |
| 20    | 102         | $3.654/hour   | $2.375/hour (~35% savings) | $1.644/hour (~55% savings) |
| 24    | 122.4       | $4.384/hour   | $2.850/hour (~35% savings) | $1.973/hour (~55% savings) |
| 32    | 163.2       | $5.846/hour   | $3.800/hour (~35% savings) | $2.631/hour (~55% savings) |
| 40    | 204         | $7.307/hour   | $4.749/hour (~35% savings) | $3.288/hour (~55% savings) |
| 80    | 396         | $14.613/hour  | $9.498/hour (~35% savings) | $6.576/hour (~55% savings) |
¹ Learn more about Azure reservations and Azure SQL Database reserved capacity pricing.
Compute is provisioned in virtual cores (vCores) with an option to choose between compute generations.
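To make the reserved-capacity discounts concrete, here is a quick check against the 8 vCore row of the standard-series (Gen 5) table (730 hours/month is an assumed full month of provisioned compute):

```python
# Monthly cost and savings for 8 vCores, standard-series (Gen 5), using the listed hourly rates.
HOURS_PER_MONTH = 730
rates = {"pay as you go": 1.462, "1-year reserved": 0.950, "3-year reserved": 0.658}  # USD/hour

for label, rate in rates.items():
    savings = (1 - rate / rates["pay as you go"]) * 100
    print(f"{label:>15}: ${rate * HOURS_PER_MONTH:,.2f}/month ({savings:.0f}% vs pay as you go)")
```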
DC-series
The DC-series logical CPUs are based on Intel Xeon E-2288G processors with Software Guard Extensions (Intel SGX) technology. In the DC-series, 1 vCore = 1 physical core. DC-series supports Always Encrypted with secure enclaves and is designed for workloads that process sensitive data and demand confidential query processing capabilities.
| vCore | Memory (GB) | Pay as you go |
|-------|-------------|---------------|
| 2     | 9           | $0.73/hour    |
| 4     | 18          | $1.46/hour    |
| 6     | 27          | $2.19/hour    |
| 8     | 36          | $2.92/hour    |
| 10    | 45          | $3.65/hour    |
| 12    | 54          | $4.38/hour    |
| 14    | 63          | $5.11/hour    |
| 16    | 72          | $5.84/hour    |
| 18    | 81          | $6.57/hour    |
| 20    | 90          | $7.30/hour    |
| 32    | 144         | $11.68/hour   |
| 40    | 180         | $14.60/hour   |
This hardware option is subject to regional availability. See our documentation for the latest list of available regions.
Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations.
Premium-series
Premium-series logical CPUs are based on the latest Intel(R) Xeon (Ice Lake) and AMD EPYC™ 7763v (Milan) chipsets; 1 vCore = 1 hyperthread. The premium-series logical CPU is a great fit for database workloads that require faster compute and memory performance, as well as improved IO and network experience, over the standard-series hardware offering.
| vCore | Memory (GB) | Pay as you go | 1-year reserved capacity¹ |
|-------|-------------|---------------|----------------------------|
| 2     | 10.4        | $0.366/hour   | $0.238/hour (~35% savings) |
| 4     | 20.8        | $0.731/hour   | $0.475/hour (~35% savings) |
| 6     | 31.1        | $1.096/hour   | $0.713/hour (~35% savings) |
| 8     | 41.5        | $1.462/hour   | $0.950/hour (~35% savings) |
| 10    | 51.9        | $1.827/hour   | $1.188/hour (~35% savings) |
| 12    | 62.3        | $2.192/hour   | $1.425/hour (~35% savings) |
| 14    | 72.7        | $2.558/hour   | $1.663/hour (~35% savings) |
| 16    | 83          | $2.923/hour   | $1.900/hour (~35% savings) |
| 18    | 93.4        | $3.288/hour   | $2.137/hour (~35% savings) |
| 20    | 103.8       | $3.654/hour   | $2.375/hour (~35% savings) |
| 24    | 124.6       | $4.384/hour   | $2.850/hour (~35% savings) |
| 32    | 166.1       | $5.846/hour   | $3.800/hour (~35% savings) |
| 40    | 207.6       | $7.307/hour   | $4.749/hour (~35% savings) |
| 64    | 664.4       | $11.691/hour  | $7.599/hour (~35% savings) |
| 80    | 415.2       | $14.613/hour  | $9.498/hour (~35% savings) |
| 128   | 647.8       | $23.381/hour  | $15.197/hour (~35% savings) |
¹ Learn more about Azure reservations and Azure SQL Database reserved capacity pricing.
Compute is provisioned in virtual cores (vCores). A vCore represents a logical CPU offered with an option to choose between compute generations.
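One pattern worth noting: the pay-as-you-go rates in these tables scale roughly linearly with vCores, at about $0.1827 per vCore-hour for both standard- and premium-series. A small sanity check against a few listed rows (the per-vCore figure is derived from the tables, not an official rate):

```python
# Compare a few listed pay-as-you-go rates with a simple linear per-vCore estimate.
PER_VCORE_HOUR = 0.1827  # approximate USD per vCore-hour, derived from the tables above

for vcores, listed in [(2, 0.366), (16, 2.923), (80, 14.613), (128, 23.381)]:
    estimate = vcores * PER_VCORE_HOUR
    print(f"{vcores:>3} vCores: listed ${listed}/hour vs ~${estimate:.3f}/hour estimated")
```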
----------------
[Query]
=======
What are the 5th gen Standard Series CPUs based on? |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | I am a high school teacher and I've been concerned about my students using generative AI to complete their assignments. I'm afraid that they are finding the easy way to get a passing grade without putting in the effort, and this will result in them not learning anything. To find a way to deal with this, I've been reading some articles and I found one that has several interesting points. What are some ways in which, as a teacher, I can use GenAI to my advantage? What concerns should I be aware of regarding the use of GenAI by students? | Generative AI (GenAI) can be defined as a “technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)”. As generative Artificial Intelligence (AI) continues to evolve rapidly, in the next few years, it will drive innovation and improvements in higher education, but it will also create a myriad of new challenges. Specifically, ChatGPT (Chat Generative Pre-Trained Transformer), a chatbot driven by GenAI, has been attracting headlines and has become the center of ongoing debate regarding the potential negative effects that it can have on teaching and learning. ChatGPT describes itself as a large language model trained to “generate humanlike text based on a given prompt or context. It can be used for a variety of natural language processing tasks, such as text completion, conversation generation, and language translation”. Given its advanced generative skills, one of the major concerns in higher education is that it can be used to reply to exam questions, write assignments and draft academic essays without being easily detected by current versions of anti-plagiarism software.
Responses from higher education institutions (HEIs) to this emerging threat to academic integrity have been varied and fragmented, ranging from those that have rushed to implement full bans on the use of ChatGPT to others who have started to embrace it by publishing student guidance on how to engage with AI effectively and ethically. Nevertheless, most of the information provided by higher education institutions (HEIs) to students so far has been unclear or lacking in detail regarding the specific circumstances in which the use of ChatGPT is allowed or considered acceptable. However, what is evident is that most HEIs are currently in the process of reviewing their policies around the use of ChatGPT and its implications for academic integrity.
Meanwhile, a growing body of literature has started to document the potential challenges and opportunities posed by ChatGPT. Among the key issues with the use of ChatGPT in education, accuracy, reliability, and plagiarism are regularly cited. Issues related to accuracy and reliability include relying on biased data (i.e., the limited scope of data used to train ChatGPT), having limited up-to-date knowledge (i.e., training stopped in 2021), and generating incorrect/fake information (e.g., providing fictitious references). It is also argued that the risk of overreliance on ChatGPT could negatively impact students’ critical thinking and problem-solving skills. Regarding plagiarism, evidence suggests that essays generated by ChatGPT can bypass conventional plagiarism detectors. ChatGPT can also successfully pass graduate-level exams, which could potentially make some types of assessments obsolete.
ChatGPT can also be used to enhance education, provided that its limitations (as discussed in the previous paragraph) are recognized. For instance, ChatGPT can be used as a tool to generate answers to theory-based questions and generate initial ideas for essays, but students should be mindful of the need to examine the credibility of generated responses. Given its advanced conversational skills, ChatGPT can also provide formative feedback on essays and become a tutoring system by stimulating critical thinking and debates among students. The language editing and translation skills of ChatGPT can also contribute towards increased equity in education by somewhat leveling the playing field for students from non-English speaking backgrounds. ChatGPT can also be a valuable tool for educators as it can help in creating lesson plans for specific courses, developing customized resources and learning activities (i.e., personalized learning support), carrying out assessment and evaluation, and supporting the writing process of research. ChatGPT might also be used to enrich a reflective teaching practice by testing existing assessment methods to validate their scope, design, and capabilities beyond the possible use of GenAI, challenging academics to develop AI-proof assessments as a result and contributing to the authentic assessment of students’ learning achievements.
Overall, some early studies have started to shed some light regarding the potential challenges and opportunities of ChatGPT for higher education, but more in-depth discussions are needed. We argue that the current discourse is highly focused on studying ChatGPT as an object rather than a subject. Given the advanced generative capabilities of ChatGPT, we would like to contribute to the ongoing discussion by exploring what ChatGPT has to say about itself regarding the challenges and opportunities that it represents for higher education. By adopting this approach, we hope to contribute to a more balanced discussion that accommodates the AI perspective using a ‘thing ethnography’ methodology. This approach considers things not as objects but as subjects that possess a non-human worldview or perspective that can point to novel insights in research. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
I am a high school teacher and I've been concerned about my students using generative AI to complete their assignments. I'm afraid that they are finding the easy way to get a passing grade without putting in the effort, and this will result in them not learning anything. To find a way to deal with this, I've been reading some articles and I found one that has several interesting points. What are some ways in which, as a teacher, I can use GenAI to my advantage? What concerns should I be aware of regarding the use of GenAI by students?
{passage 0}
==========
Generative AI (GenAI) can be defined as a “technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)”. As generative Artificial Intelligence (AI) continues to evolve rapidly, in the next few years, it will drive innovation and improvements in higher education, but it will also create a myriad of new challenges. Specifically, ChatGPT (Chat Generative Pre-Trained Transformer), a chatbot driven by GenAI, has been attracting headlines and has become the center of ongoing debate regarding the potential negative effects that it can have on teaching and learning. ChatGPT describes itself as a large language model trained to “generate humanlike text based on a given prompt or context. It can be used for a variety of natural language processing tasks, such as text completion, conversation generation, and language translation”. Given its advanced generative skills, one of the major concerns in higher education is that it can be used to reply to exam questions, write assignments and draft academic essays without being easily detected by current versions of anti-plagiarism software.
Responses from higher education institutions (HEIs) to this emerging threat to academic integrity have been varied and fragmented, ranging from those that have rushed to implement full bans on the use of ChatGPT to others who have started to embrace it by publishing student guidance on how to engage with AI effectively and ethically. Nevertheless, most of the information provided by higher education institutions (HEIs) to students so far has been unclear or lacking in detail regarding the specific circumstances in which the use of ChatGPT is allowed or considered acceptable. However, what is evident is that most HEIs are currently in the process of reviewing their policies around the use of ChatGPT and its implications for academic integrity.
Meanwhile, a growing body of literature has started to document the potential challenges and opportunities posed by ChatGPT. Among the key issues with the use of ChatGPT in education, accuracy, reliability, and plagiarism are regularly cited. Issues related to accuracy and reliability include relying on biased data (i.e., the limited scope of data used to train ChatGPT), having limited up-to-date knowledge (i.e., training stopped in 2021), and generating incorrect/fake information (e.g., providing fictitious references). It is also argued that the risk of overreliance on ChatGPT could negatively impact students’ critical thinking and problem-solving skills. Regarding plagiarism, evidence suggests that essays generated by ChatGPT can bypass conventional plagiarism detectors. ChatGPT can also successfully pass graduate-level exams, which could potentially make some types of assessments obsolete.
ChatGPT can also be used to enhance education, provided that its limitations (as discussed in the previous paragraph) are recognized. For instance, ChatGPT can be used as a tool to generate answers to theory-based questions and generate initial ideas for essays, but students should be mindful of the need to examine the credibility of generated responses. Given its advanced conversational skills, ChatGPT can also provide formative feedback on essays and become a tutoring system by stimulating critical thinking and debates among students. The language editing and translation skills of ChatGPT can also contribute towards increased equity in education by somewhat leveling the playing field for students from non-English speaking backgrounds. ChatGPT can also be a valuable tool for educators as it can help in creating lesson plans for specific courses, developing customized resources and learning activities (i.e., personalized learning support), carrying out assessment and evaluation, and supporting the writing process of research. ChatGPT might also be used to enrich a reflective teaching practice by testing existing assessment methods to validate their scope, design, and capabilities beyond the possible use of GenAI, challenging academics to develop AI-proof assessments as a result and contributing to the authentic assessment of students’ learning achievements.
Overall, some early studies have started to shed some light regarding the potential challenges and opportunities of ChatGPT for higher education, but more in-depth discussions are needed. We argue that the current discourse is highly focused on studying ChatGPT as an object rather than a subject. Given the advanced generative capabilities of ChatGPT, we would like to contribute to the ongoing discussion by exploring what ChatGPT has to say about itself regarding the challenges and opportunities that it represents for higher education. By adopting this approach, we hope to contribute to a more balanced discussion that accommodates the AI perspective using a ‘thing ethnography’ methodology. This approach considers things not as objects but as subjects that possess a non-human worldview or perspective that can point to novel insights in research.
https://www.mdpi.com/2227-7102/13/9/856 |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | the first time I had c diff I took vancomycin to treat it but i had an allergic reaction. i have c diff again. what can make it go away? I am retired. please only list affordable options. | Current Pharmacologic Options
There are 3 antibiotics available for treatment of CDI: metronidazole, vancomycin, and fidaxomicin. All have demonstrated efficacy with similar rates of cure in nonsevere disease [18]. However, use of fidaxomicin, with a narrower spectrum of antimicrobial activity, has been shown to result in lower recurrence rates [19]. Based on this benefit, the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Disease Society of America (IDSA) clinical practice guidelines position fidaxomicin above vancomycin for treatment of an initial CDI [20]. The guidelines published by the American College of Gastroenterology (ACG) differ, stating that either vancomycin or fidaxomicin is appropriate to treat an initial CDI [21]. This recommendation was based on comparable efficacy data with far lower costs of vancomycin compared with fidaxomicin.
In severe infections, treatment with metronidazole was shown to be associated with increased 30-day mortality [22]. In addition, C. difficile isolates with reduced susceptibility to metronidazole and treatment failures with this agent are increasing [23]. Therefore, use of metronidazole to treat patients with severe disease or those who are older or have comorbidities is not recommended by either practice guideline, though ACG guidelines maintain that metronidazole is appropriate to treat nonsevere infections in younger patients without comorbidities. SHEA/IDSA guidelines discourage its use altogether.
Bezlotoxumab is a monoclonal antibody that binds to toxin B and has been shown to reduce CDI recurrence in patients at high risk of recurrence [24]. Neutralization of the toxin while the antibody remains in circulation may prevent symptoms in the event of C. difficile regrowth after completion of antibiotic therapy. There are significant drug and infusion costs associated with use of this agent, and clinical trials demonstrated a number needed to treat of 10 to prevent 1 recurrent CDI. The Gastroenterology Society treatment guidelines recommend considering use of this agent for patients in whom the observed clinical benefits were greatest, including those aged ≥65 years with at least 1 of the following additional risk factors: experiencing their second episode of CDI within the past 6 months, immunocompromised, or with severe CDI [21]. SHEA/IDSA guidelines similarly recommend use be reserved for patients with risk factors for recurrence.
Fecal Microbiota Transplantation: History and Current Status
FMT has emerged as a safe and effective therapy for CDI and is now recommended in treatment guidelines after a second recurrence [21]. Initially described by the surgeon Ben Eiseman as a treatment for pseudomembranous enterocolitis in 1958 [25] with several additional successful case reports over the years [26], it was viewed as a treatment of “last resort” until rising numbers of severe and recurrent cases of C. difficile in the early 2000s. A 2004 publication by gastroenterologist Thomas Borody promoted the use of screened donor stool as “bacteriotherapy” for CDI and other gastrointestinal conditions [27]. Interest in the treatment began to gain momentum, yet through 2008, only 100 patients comprised the world's literature of recurrent CDI treated with bacteriotherapy [28].
In 2010, Yoon et al reported 100% success in 12 patients treated with donor stool administered during colonoscopy [29], and this method was rapidly adopted by other gastroenterologists as a well-tolerated and effective method. By 2012, other investigators were reporting similar high rates of cure using the colonoscopic approach in larger case series [30, 31], and the first long-term follow-up study supported the treatment as durable and safe [32]. Mechanisms of effect were also being investigated with 16s rRNA sequencing to characterize recipient's stool before and after the procedure and showed dramatic effects on the composition of the gut microbiome [33]. In the interest of standardizing the approach to donor selection and screening and administration protocols, a multidisciplinary working group was formed. This group, comprised of gastroenterologists and infectious diseases specialists, coined the term “fecal microbiota transplant” and published the first guidance for clinicians [34]. In January 2013, results from the first randomized, controlled trial of FMT for treatment of recurrent CDI were reported [35]. This landmark Dutch study was stopped at the interim analysis after duodenal infusion of a preparation of donor feces was found to be far superior to standard-of-care oral vancomycin in preventing further CDI recurrence. This study was widely publicized, increasing awareness among patients and members of the medical community and reassuring physicians about the efficacy and safety of the procedure (Figure 1).
Prompted by growing interest in FMT for CDI and other applications, the US Food and Drug Administration (FDA) convened a public workshop in 2013 titled “Fecal Microbiota for Transplantation” with the purpose of exchanging information with the medical and scientific communities about the regulatory and scientific issues associated with FMT. Clinicians, scientists, and patient advocates were invited to speak and present data on the gut microbiome, the epidemiology and treatment of recurrent CDI and its impact on patients, and FMT for CDI. At the meeting’s conclusion, it was announced that the FDA intended to regulate fecal microbiota as a biologic drug. As such, it was unapproved, and an investigational new drug (IND) application would be required to administer or conduct clinical trials on FMT. In subsequent communications, physicians and scientists expressed concern that the IND requirement was burdensome to physicians and would adversely affect the availability of FMT to patients who were suffering with recurrent CDI. In acknowledgment of these concerns, the FDA announced the policy of “enforcement discretion,” which permitted FMT to be done for patients suffering from CDI who had not responded to standard therapies, provided they were given informed consent stating that FMT is investigational and discussing potential risks [36].
Around this time, OpenBiome, a nonprofit stool bank founded by a team of physicians, microbiologists, and public health experts, was established and began to provide screened donor material for FMT. Operating under enforcement discretion, OpenBiome centralized the process of donor testing, stool donation, and processing and shipped preparations of frozen donor material to clinicians for use in FMT. With extensive donor health screenings and serologic and stool testing, infection transmission risk was minimized. With this convenient source of donor stool, FMT was facilitated; by 2018, OpenBiome had shipped 10 000 doses and partnered with investigators to conduct research around FMT and the gut microbiome. The widespread adoption of stool banks was not anticipated by FDA, and members of industry argued that the availability of donor stool under enforcement discretion was impacting enrollment in clinical trials of live biotherapeutic products. In response, the agency issued a draft guidance in 2016 that would require stool banks to adhere to IND requirements in order to distribute FMT products [37]. This would remain in draft form for several years while public comments were elicited and LBPs for treatment of CDI remained in clinical development.
Gastroenterologists continued to work with collaborators from other disciplines to contribute to discoveries and innovation in the field, and the years 2013 through 2019 saw great advances in knowledge around FMT. A multicenter, retrospective series on the use of FMT in immunocompromised recipients demonstrated the effective use of FMT for CDI in this population with few serious adverse events and no related infectious complications in these high-risk patients [38]. Additional series showed the effectiveness for CDI in patients with inflammatory bowel disease (IBD) [39–41], solid organ transplant recipients [42], and elderly individuals [43] and examined factors predictive of FMT failure [44]. The first placebo-controlled trial of colonoscopically administered FMT was published in 2016 [45]. This study enrolled 46 patients with multiple recurrent CDI. Patients were randomized after completing a course of oral vancomycin to treat the most recent episode and received FMT using donor stool or autologous FMT (as placebo). In the intention-to-treat analysis, 20 of 22 patients (90.9%) in the donor FMT group achieved clinical cure at 8 weeks compared with 15 of 24 (62.5%) in the autologous FMT group (P = .042). | [question]
the first time I had c diff I took vancomycin to treat it but i had an allergic reaction. i have c diff again. what can make it go away? I am retired. please only list affordable options.
=====================
[text]
Current Pharmacologic Options
There are 3 antibiotics available for treatment of CDI: metronidazole, vancomycin, and fidaxomicin. All have demonstrated efficacy with similar rates of cure in nonsevere disease [18]. However, use of fidaxomicin, with a narrower spectrum of antimicrobial activity, has been shown to result in lower recurrence rates [19]. Based on this benefit, the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Disease Society of America (IDSA) clinical practice guidelines position fidaxomicin above vancomycin for treatment of an initial CDI [20]. The guidelines published by the American College of Gastroenterology (ACG) differ, stating that either vancomycin or fidaxomicin is appropriate to treat an initial CDI [21]. This recommendation was based on comparable efficacy data with far lower costs of vancomycin compared with fidaxomicin.
In severe infections, treatment with metronidazole was shown to be associated with increased 30-day mortality [22]. In addition, C. difficile isolates with reduced susceptibility to metronidazole and treatment failures with this agent are increasing [23]. Therefore, use of metronidazole to treat patients with severe disease or those who are older or have comorbidities is not recommended by either practice guideline, though ACG guidelines maintain that metronidazole is appropriate to treat nonsevere infections in younger patients without comorbidities. SHEA/IDSA guidelines discourage its use altogether.
Bezlotoxumab is a monoclonal antibody that binds to toxin B and has been shown to reduce CDI recurrence in patients at high risk of recurrence [24]. Neutralization of the toxin while the antibody remains in circulation may prevent symptoms in the event of C. difficile regrowth after completion of antibiotic therapy. There are significant drug and infusion costs associated with use of this agent, and clinical trials demonstrated a number needed to treat of 10 to prevent 1 recurrent CDI. The Gastroenterology Society treatment guidelines recommend considering use of this agent for patients in whom the observed clinical benefits were greatest, including those aged ≥65 years with at least 1 of the following additional risk factors: experiencing their second episode of CDI within the past 6 months, immunocompromised, or with severe CDI [21]. SHEA/IDSA guidelines similarly recommend use be reserved for patients with risk factors for recurrence.
Fecal Microbiota Transplantation: History and Current Status
FMT has emerged as a safe and effective therapy for CDI and is now recommended in treatment guidelines after a second recurrence [21]. Initially described by the surgeon Ben Eiseman as a treatment for pseudomembranous enterocolitis in 1958 [25] with several additional successful case reports over the years [26], it was viewed as a treatment of “last resort” until rising numbers of severe and recurrent cases of C. difficile in the early 2000s. A 2004 publication by gastroenterologist Thomas Borody promoted the use of screened donor stool as “bacteriotherapy” for CDI and other gastrointestinal conditions [27]. Interest in the treatment began to gain momentum, yet through 2008, only 100 patients comprised the world's literature of recurrent CDI treated with bacteriotherapy [28].
In 2010, Yoon et al reported 100% success in 12 patients treated with donor stool administered during colonoscopy [29], and this method was rapidly adopted by other gastroenterologists as a well-tolerated and effective method. By 2012, other investigators were reporting similar high rates of cure using the colonoscopic approach in larger case series [30, 31], and the first long-term follow-up study supported the treatment as durable and safe [32]. Mechanisms of effect were also being investigated with 16s rRNA sequencing to characterize recipient's stool before and after the procedure and showed dramatic effects on the composition of the gut microbiome [33]. In the interest of standardizing the approach to donor selection and screening and administration protocols, a multidisciplinary working group was formed. This group, comprised of gastroenterologists and infectious diseases specialists, coined the term “fecal microbiota transplant” and published the first guidance for clinicians [34]. In January 2013, results from the first randomized, controlled trial of FMT for treatment of recurrent CDI were reported [35]. This landmark Dutch study was stopped at the interim analysis after duodenal infusion of a preparation of donor feces was found to be far superior to standard-of-care oral vancomycin in preventing further CDI recurrence. This study was widely publicized, increasing awareness among patients and members of the medical community and reassuring physicians about the efficacy and safety of the procedure (Figure 1).
Prompted by growing interest in FMT for CDI and other applications, the US Food and Drug Administration (FDA) convened a public workshop in 2013 titled “Fecal Microbiota for Transplantation” with the purpose of exchanging information with the medical and scientific communities about the regulatory and scientific issues associated with FMT. Clinicians, scientists, and patient advocates were invited to speak and present data on the gut microbiome, the epidemiology and treatment of recurrent CDI and its impact on patients, and FMT for CDI. At the meeting’s conclusion, it was announced that the FDA intended to regulate fecal microbiota as a biologic drug. As such, it was unapproved, and an investigational new drug (IND) application would be required to administer or conduct clinical trials on FMT. In subsequent communications, physicians and scientists expressed concern that the IND requirement was burdensome to physicians and would adversely affect the availability of FMT to patients who were suffering with recurrent CDI. In acknowledgment of these concerns, the FDA announced the policy of “enforcement discretion,” which permitted FMT to be done for patients suffering from CDI who had not responded to standard therapies, provided they were given informed consent stating that FMT is investigational and discussing potential risks [36].
Around this time, OpenBiome, a nonprofit stool bank founded by a team of physicians, microbiologists, and public health experts, was established and began to provide screened donor material for FMT. Operating under enforcement discretion, OpenBiome centralized the process of donor testing, stool donation, and processing and shipped preparations of frozen donor material to clinicians for use in FMT. With extensive donor health screenings and serologic and stool testing, infection transmission risk was minimized. With this convenient source of donor stool, FMT was facilitated; by 2018, OpenBiome had shipped 10 000 doses and partnered with investigators to conduct research around FMT and the gut microbiome. The widespread adoption of stool banks was not anticipated by FDA, and members of industry argued that the availability of donor stool under enforcement discretion was impacting enrollment in clinical trials of live biotherapeutic products. In response, the agency issued a draft guidance in 2016 that would require stool banks to adhere to IND requirements in order to distribute FMT products [37]. This would remain in draft form for several years while public comments were elicited and LBPs for treatment of CDI remained in clinical development.
Gastroenterologists continued to work with collaborators from other disciplines to contribute to discoveries and innovation in the field, and the years 2013 through 2019 saw great advances in knowledge around FMT. A multicenter, retrospective series on the use of FMT in immunocompromised recipients demonstrated the effective use of FMT for CDI in this population with few serious adverse events and no related infectious complications in these high-risk patients [38]. Additional series showed the effectiveness for CDI in patients with inflammatory bowel disease (IBD) [39–41], solid organ transplant recipients [42], and elderly individuals [43] and examined factors predictive of FMT failure [44]. The first placebo-controlled trial of colonoscopically administered FMT was published in 2016 [45]. This study enrolled 46 patients with multiple recurrent CDI. Patients were randomized after completing a course of oral vancomycin to treat the most recent episode and received FMT using donor stool or autologous FMT (as placebo). In the intention-to-treat analysis, 20 of 22 patients (90.9%) in the donor FMT group achieved clinical cure at 8 weeks compared with 15 of 24 (62.5%) in the autologous FMT group (P = .042).
https://academic.oup.com/cid/article/77/Supplement_6/S463/7459148
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Answer the prompt using only the provided text and nothing else. Your answer must be at least 4 sentences but no more than 6 sentences. Your answer must also be in paragraph format with no lists. | I have a year and a month left on my lease and my landlord is going to evict me even though I've paid my rent, how long do I have to leave after they actually start the process? | Consumer Guide to Tenant and Landlord Rights
Version 1.1. Last updated June 13, 2022.
a claim of housing discrimination based on a protected category.
V. ENDING A LEASE
A periodic lease (month-to-month or year-to-year) most often ends in one of three ways:
• You move out before the end of the term
• You or the landlord gives notice 15 days before the end of the term that the lease will
not renew
• You or the landlord materially breaches the lease
A lease for a fixed period of time most often ends in one of three ways:
• You move out before the end of the term
• The term of the lease expires and the parties do not agree to renew
• You or the landlord materially breaches the lease
A. Early Termination
There is no stand-alone right to terminate a lease early, and many lease agreements do not
allow a tenant to terminate early. If you voluntarily move out before the end of the lease, the
lease does not allow for early termination, and the landlord has not breached any of their
obligations, then you will likely be responsible for paying rent until the lease expires or until the
landlord rents the unit to a new tenant.
In Pennsylvania, the landlord has no obligation to locate a new tenant to rent the unit. If you
move out early, the landlord may be able to make you pay rent for the rest of the lease term.
B. Security Deposit Refund and Deductions
To have your security deposit refunded, you must provide the landlord with a forwarding
address and return the keys to the property. 68 P.S. § 250.512(e). Before leaving, clean the unit
as thoroughly as possible and take photos to document its condition.
Within 30 days after you have moved out, the landlord must either return the entire security
deposit or send you a list of damages, the cost of repairs, and any money remaining from the
security deposit. 68 P.S. § 250.512(a). Permissible deductions include “actual damages” to the
rental unit. Id.
If the landlord does not provide a written list of damages within 30 days, they may not keep any
part of the security deposit. 68 P.S. § 250.512(b). You may then sue to recover double the
amount of the deposit minus any actual damages as determined by a court. 68 P.S. §
250.512(c).
If, within 30 days, the landlord fails to pay you the difference between the security deposit and
the actual damages to the property, the landlord is liable for double the amount by which the
security deposit exceeds the actual damages to the property. 68 P.S. § 250.512(b)-(c).
If you break the lease, the security deposit may be forfeited. 68 P.S. § 250.512(a).
C. Eviction
A landlord may seek to evict you if you fail to pay rent, fail to move out at the end of the lease,
or violate a term of the lease. 68 P.S. § 250.501(a).
Landlord “self-help” eviction is prohibited – i.e., landlord may not change your locks or shut off
your utility service to initiate an eviction. Instead, the landlord must follow the process below.
Some cities in Pennsylvania, such as Philadelphia, may have eviction diversion or mediation
programs designed to help you and the landlord come to an agreement without using the court
process or creating an eviction record.
1. Notice to Quit
To begin eviction, the landlord must first give you a written eviction notice known as a Notice
to Quit.
• If the eviction is for failure to pay rent or for use of illegal drugs, the Notice to Quit must
give the tenant 10 days to leave voluntarily. 68 P.S. §§ 250.501(b), 250.505-A.
• If the eviction is for a breach of any other condition of the lease and the lease is for one
year or less (or an indeterminate time), the notice to quit must give the tenant 15 days
to leave voluntarily. 68 P.S. § 250.501(b).
• If the eviction is for a breach any other condition of the lease and the lease is for more
than one year, the notice to quit must give the tenant 30 days to leave voluntarily. 68
P.S. § 250.501(b).
The landlord must notify you of the notice to quit in one of three ways, 68 P.S. § 250.501(f):
• Give you the notice personally
• Leave the notice at the main building of the leased property
• Post the notice conspicuously on the leased property
Notice requirements may be and are often waived in the lease under a Waiver of Notice to
Quit provision. If the notice is validly waived, the landlord is permitted to take you to court (see
Landlord Tenant Complaint (Part V.C.2) below), without any advance notice. 68 P.S.
§ 250.501(e).
The Office of Attorney General encourages all landlords across the Commonwealth to deal fairly
with tenants by not including Waiver of Notice to Quit provisions in lease agreements and by
providing tenants with notice before beginning eviction proceedings.
2. Landlord-Tenant Complaint
If you do not voluntarily leave the unit within the time period listed in the notice to quit, the
landlord still cannot evict you themselves. Instead, the landlord must file a legal action in court
often referred to as a Landlord-Tenant Complaint.
The complaint will be filed with your county’s Magisterial District Court, or in Philadelphia, the
Philadelphia Municipal Court, or in Allegheny County, the Housing Court. The complaint will
ask for possession of the unit and may also ask for back rent or damages. 246 Pa. Code § 503.
After the complaint is filed, the court will issue a summons to you, which is a copy of the
complaint and a notice to appear at a hearing on a specific date and time. 246 Pa. Code § 504.
The court will serve the summons by mailing a copy to you at your last known address by First-
Class Mail. 246 Pa. Code § 506. A sheriff or certified constable will also serve you with the
summons personally or by posting it conspicuously on the leased property. Id.
If you have any claims against the landlord for breach of the lease—for example, a breach of
the Implied Warranty of Habitability (Part IV.C)—you may file a counterclaim, but you must do
so before the date of the hearing. 246 Pa. Code § 508.
Each court will have slightly different rules for how it processes landlord-tenant complaints.
Make sure to check the local rules (sometimes called local civil rules or rules and procedures)
for the court where the complaint was filed to make sure you are aware of how the court
operates. For example, the Philadelphia Municipal Court will continue (i.e., postpone) the
hearing on a landlord-tenant complaint if the tenant has filed a complaint which has been
accepted by the Philadelphia Fair Housing Commission prior to the date the landlord filed the
complaint for eviction.
3. Hearing & Judgment
At the hearing, you and the landlord will each have an opportunity to present your case. 246
Pa. Code § 512. You may bring a lawyer to help you. You may also bring documents, photos,
emails, records, and other evidence to support your case.
It is extremely important that you do not miss your hearing date. If you miss or are late to
your hearing date, the landlord wins by default. If you cannot attend the hearing, contact the
court and ask if the hearing can be rescheduled.
At the end of the hearing or within three days, the judge will make a decision, called a Notice of
Judgment. 246 Pa. Code § 514(C).
• Judgment in favor of the landlord: If the judge rules in favor of the landlord, then the
landlord will be granted possession of the unit. 246 Pa. Code § 514. The judge may also
require you to pay damages and unpaid rent. Id.
• Judgment in favor of the tenant: If the judge rules in your favor, then the landlord must
do what the judge orders them to do, such as allowing you to remain in the unit or
paying you money.
4. Appeal
You have 10 days after entry of judgment against you to file an appeal in the court of
common pleas. 68 P.S. § 250.513; 246 Pa. Code § 1008. The appeal will block an actual eviction
(in legal terminology, operate as a supersedeas) only if you deposit with the court either three
months’ rent or the amount the judge ordered you to pay, whichever is less. 246 Pa. Code
§ 1008(B). If you are low-income, you can file a tenant’s affidavit and deposit one third of your
monthly rent. 246 Pa. Code § 1008(C). In both situations, you will also have to deposit rent each
month while the appeal is pending. 246 Pa. Code § 1008(B), (C). The money will be held in
escrow by the court.
5. Order of Possession
Even if judgment is entered in favor of the landlord, the landlord still cannot evict you
themselves. Instead, the landlord must wait 10 days after entry of judgment and then ask the
court to issue an order of possession. 246 Pa. Code § 515(B).
The court will serve the order of possession by mailing a copy to you at your last known address
by First-Class Mail. 246 Pa. Code § 517. A sheriff or certified constable will also serve you with
the order of possession personally or by posting it conspicuously on the leased property. Id. The
order of possession will require you to vacate the residential unit within 10 days after the date
of service. 246 Pa. Code § 517(2).
If you remain in the rental unit on the 11th day following service of the order of possession,
then you can be forcibly evicted. 246 Pa. Code § 519(B).
If you are forcibly evicted and leave possessions behind, the landlord must notify you by First-
Class Mail of your right to retrieve the property. 68 P.S. § 250.505a(d), (e). You have 10 days
from the postmark date of the notice to either retrieve your possessions or ask that your
landlord store your possessions for up to 30 days. Id. If you ask your landlord to store your
possessions, you will be responsible for any costs. Id. If you do not contact the landlord or
retrieve your property, the landlord can dispose of it.
6. Domestic Violence Survivor
If you are a victim of domestic violence, then you have 30 days to appeal a judgment in favor
of the landlord. 68 P.S. § 250.513(b). If the landlord obtains an order of possession before the
30 days have passed, you can file a domestic violence affidavit with the court to stay (i.e.,
freeze) the order of possession pending an appeal or until the end of the 30 days. 246 Pa. Code
§ 514.1.
7. Satisfaction of Judgment for Nonpayment of Rent
If the eviction is only for failure to pay rent, a tenant can stop the eviction by paying—at any point
before actual eviction—the full amount of unpaid rent and other fees. 246 Pa. Code § 518.
Once that happens, one more step needs to be taken: the landlord should enter with the court
that the judgment has been satisfied. 246 Pa. Code § 341. If your landlord does not do so, you
should file a written request to have the judgment marked satisfied with the court and serve it
on the landlord. 246 Pa. Code § 341.
If the landlord does not enter that the judgment has been satisfied within 90 days of a written
request without good cause, the landlord will be liable to the tenant for 1% of the judgment
amount, at least $250 and up to $2,500, every month the judgment is not marked satisfied. 42
Pa. Cons. Stat. § 8104(b).
8. Eviction Records
An “eviction record” is an official record—court filings, transcripts and orders, for example—
that contains information about a past or ongoing lawsuit to evict a tenant.
There is no uniform requirement across all Pennsylvania courts for what details must be
included in an eviction record; there is no guarantee that the information contained in eviction
records tells the whole story.
An eviction record may also show, incorrectly, that a tenant who was evicted for failure to pay
rent has not satisfied the judgment just because the landlord has failed to have the judgment
marked satisfied with the court.
Pennsylvania does not automatically seal or expunge eviction records, even if the case is
withdrawn.
Some cities in Pennsylvania, such as Philadelphia, may have eviction diversion or mediation
programs designed to help you and the landlord come to an agreement without using the court
process or creating an eviction record. | Answer the prompt using only the provided text and nothing else. Your answer must be at least 4 sentences but no more than 6 sentences. Your answer must also be in paragraph format with no lists.
Consumer Guide to Tenant and Landlord Rights
Version 1.1. Last updated June 13, 2022.
a claim of housing discrimination based on a protected category.
V. ENDING A LEASE
A periodic lease (month-to-month or year-to-year) most often ends in one of three ways:
• You move out before the end of the term
• You or the landlord gives notice 15 days before the end of the term that the lease will
not renew
• You or the landlord materially breaches the lease
A lease for a fixed period of time most often ends in one of three ways:
• You move out before the end of the term
• The term of the lease expires and the parties do not agree to renew
• You or the landlord materially breaches the lease
A. Early Termination
There is no stand-alone right to terminate a lease early, and many lease agreements do not
allow a tenant to terminate early. If you voluntarily move out before the end of the lease, the
lease does not allow for early termination, and the landlord has not breached any of their
obligations, then you will likely be responsible for paying rent until the lease expires or until the
landlord rents the unit to a new tenant.
In Pennsylvania, the landlord has no obligation to locate a new tenant to rent the unit. If you
move out early, the landlord may be able to make you pay rent for the rest of the lease term.
B. Security Deposit Refund and Deductions
To have your security deposit refunded, you must provide the landlord with a forwarding
address and return the keys to the property. 68 P.S. § 250.512(e). Before leaving, clean the unit
as thoroughly as possible and take photos to document its condition.
Within 30 days after you have moved out, the landlord must either return the entire security
deposit or send you a list of damages, the cost of repairs, and any money remaining from the
security deposit. 68 P.S. § 250.512(a). Permissible deductions include “actual damages” to the
rental unit. Id.
If the landlord does not provide a written list of damages within 30 days, they may not keep any
part of the security deposit. 68 P.S. § 250.512(b). You may then sue to recover double the
amount of the deposit minus any actual damages as determined by a court. 68 P.S. §
250.512(c).
If, within 30 days, the landlord fails to pay you the difference between the security deposit and
the actual damages to the property, the landlord is liable for double the amount by which the
security deposit exceeds the actual damages to the property. 68 P.S. § 250.512(b)-(c).
If you break the lease, the security deposit may be forfeited. 68 P.S. § 250.512(a).
C. Eviction
A landlord may seek to evict you if you fail to pay rent, fail to move out at the end of the lease,
or violate a term of the lease. 68 P.S. § 250.501(a).
Landlord “self-help” eviction is prohibited – i.e., landlord may not change your locks or shut off
your utility service to initiate an eviction. Instead, the landlord must follow the process below.
Some cities in Pennsylvania, such as Philadelphia, may have eviction diversion or mediation
programs designed to help you and the landlord come to an agreement without using the court
process or creating an eviction record.
1. Notice to Quit
To begin eviction, the landlord must first give you a written eviction notice known as a Notice
to Quit.
• If the eviction is for failure to pay rent or for use of illegal drugs, the Notice to Quit must
give the tenant 10 days to leave voluntarily. 68 P.S. §§ 250.501(b), 250.505-A.
• If the eviction is for a breach of any other condition of the lease and the lease is for one
year or less (or an indeterminate time), the notice to quit must give the tenant 15 days
to leave voluntarily. 68 P.S. § 250.501(b).
• If the eviction is for a breach any other condition of the lease and the lease is for more
than one year, the notice to quit must give the tenant 30 days to leave voluntarily. 68
P.S. § 250.501(b).
The landlord must notify you of the notice to quit in one of three ways, 68 P.S. § 250.501(f):
• Give you the notice personally
• Leave the notice at the main building of the leased property
• Post the notice conspicuously on the leased property
Notice requirements may be and are often waived in the lease under a Waiver of Notice to
Quit provision. If the notice is validly waived, the landlord is permitted to take you to court (see
Landlord Tenant Complaint (Part V.C.2) below), without any advance notice. 68 P.S.
§ 250.501(e).
The Office of Attorney General encourages all landlords across the Commonwealth to deal fairly
with tenants by not including Waiver of Notice to Quit provisions in lease agreements and by
providing tenants with notice before beginning eviction proceedings.
2. Landlord-Tenant Complaint
If you do not voluntarily leave the unit within the time period listed in the notice to quit, the
landlord still cannot evict you themselves. Instead, the landlord must file a legal action in court
often referred to as a Landlord-Tenant Complaint.
The complaint will be filed with your county’s Magisterial District Court, or in Philadelphia, the
Philadelphia Municipal Court, or in Allegheny County, the Housing Court. The complaint will
ask for possession of the unit and may also ask for back rent or damages. 246 Pa. Code § 503.
After the complaint is filed, the court will issue a summons to you, which is a copy of the
complaint and a notice to appear at a hearing on a specific date and time. 246 Pa. Code § 504.
The court will serve the summons by mailing a copy to you at your last known address by First-
Class Mail. 246 Pa. Code § 506. A sheriff or certified constable will also serve you with the
summons personally or by posting it conspicuously on the leased property. Id.
If you have any claims against the landlord for breach of the lease—for example, a breach of
the Implied Warranty of Habitability (Part IV.C)—you may file a counterclaim, but you must do
so before the date of the hearing. 246 Pa. Code § 508.
Each court will have slightly different rules for how it processes landlord-tenant complaints.
Make sure to check the local rules (sometimes called local civil rules or rules and procedures)
for the court where the complaint was filed to make sure you are aware of how the court
operates. For example, the Philadelphia Municipal Court will continue (i.e., postpone) the
hearing on a landlord-tenant complaint if the tenant has filed a complaint which has been
accepted by the Philadelphia Fair Housing Commission prior to the date the landlord filed the
complaint for eviction.
3. Hearing & Judgment
At the hearing, you and the landlord will each have an opportunity to present your case. 246
Pa. Code § 512. You may bring a lawyer to help you. You may also bring documents, photos,
emails, records, and other evidence to support your case.
It is extremely important that you do not miss your hearing date. If you miss or are late to
your hearing date, the landlord wins by default. If you cannot attend the hearing, contact the
court and ask if the hearing can be rescheduled.
At the end of the hearing or within three days, the judge will make a decision, called a Notice of
Judgment. 246 Pa. Code § 514(C).
• Judgment in favor of the landlord: If the judge rules in favor of the landlord, then the
landlord will be granted possession of the unit. 246 Pa. Code § 514. The judge may also
require you to pay damages and unpaid rent. Id.
• Judgment in favor of the tenant: If the judge rules in your favor, then the landlord must
do what the judge orders them to do, such as allowing you to remain in the unit or
paying you money.
4. Appeal
You have 10 days after entry of judgment against you to file an appeal in the court of
common pleas. 68 P.S. § 250.513; 246 Pa. Code § 1008. The appeal will block an actual eviction
(in legal terminology, operate as a supersedeas) only if you deposit with the court either three
months’ rent or the amount the judge ordered you to pay, whichever is less. 246 Pa. Code
§ 1008(B). If you are low-income, you can file a tenant’s affidavit and deposit one third of your
monthly rent. 246 Pa. Code § 1008(C). In both situations, you will also have to deposit rent each
month while the appeal is pending. 246 Pa. Code § 1008(B), (C). The money will be held in
escrow by the court.
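Read as arithmetic, the deposit needed for the appeal to operate as a supersedeas can be sketched as follows. The names are hypothetical, the sketch assumes the judge ordered a single dollar amount, and it is for illustration only.

def supersedeas_deposit(monthly_rent, judgment_amount, low_income=False):
    # Low-income tenants who file a tenant's affidavit deposit one third of one month's rent
    # (246 Pa. Code § 1008(C)); rent must still be deposited each month while the appeal is pending.
    if low_income:
        return monthly_rent / 3
    # Otherwise: three months' rent or the amount the judge ordered, whichever is less (246 Pa. Code § 1008(B)).
    return min(3 * monthly_rent, judgment_amount)

# Example: $900 monthly rent and a $2,000 judgment -> min($2,700, $2,000) = $2,000.
print(supersedeas_deposit(900, 2000))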
5. Order of Possession
Even if judgment is entered in favor of the landlord, the landlord still cannot evict you
themselves. Instead, the landlord must wait 10 days after entry of judgment and then ask the
court to issue an order of possession. 246 Pa. Code § 515(B).
The court will serve the order of possession by mailing a copy to you at your last known address
by First-Class Mail. 246 Pa. Code § 517. A sheriff or certified constable will also serve you with
the order of possession personally or by posting it conspicuously on the leased property. Id. The
order of possession will require you to vacate the residential unit within 10 days after the date
of service. 246 Pa. Code § 517(2).
If you remain in the rental unit on the 11th day following service of the order of possession,
then you can be forcibly evicted. 246 Pa. Code § 519(B).
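Assuming the landlord acts on the earliest day allowed and the order of possession is issued and served the same day it is requested, the minimum timeline after judgment can be sketched with hypothetical dates; actual timing depends on the court.

from datetime import date, timedelta

judgment_date = date(2024, 3, 1)                    # hypothetical entry of judgment
order_request = judgment_date + timedelta(days=10)  # earliest request for an order of possession (246 Pa. Code § 515(B))
service_date = order_request                        # assumes same-day issuance and service, purely for illustration
vacate_deadline = service_date + timedelta(days=10) # must vacate within 10 days of service (246 Pa. Code § 517(2))
forcible_eviction = vacate_deadline + timedelta(days=1)  # the 11th day following service (246 Pa. Code § 519(B))
print(order_request, vacate_deadline, forcible_eviction)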
If you are forcibly evicted and leave possessions behind, the landlord must notify you by First-
Class Mail of your right to retrieve the property. 68 P.S. § 250.505a(d), (e). You have 10 days
from the postmark date of the notice to either retrieve your possessions or ask that your
landlord store your possessions for up to 30 days. Id. If you ask your landlord to store your
possessions, you will be responsible for any costs. Id. If you do not contact the landlord or
retrieve your property, the landlord can dispose of it.
6. Domestic Violence Survivor
If you are a victim of domestic violence,13 then you have 30 days to appeal a judgment in favor
of the landlord. 68 P.S. § 250.513(b). If the landlord obtains an order of possession before the
30 days have passed, you can file a domestic violence affidavit with the court to stay (i.e.,
freeze) the order of possession pending an appeal or until the end of the 30 days. 246 Pa. Code
§ 514.1.
7. Satisfaction of Judgment for Nonpayment of Rent
If the eviction is only for failure to pay rent, a tenant can stop the eviction by paying the full
amount of unpaid rent and other fees at any point before actual eviction. 246 Pa. Code § 518.
Once that happens, one more step needs to be taken: the landlord should enter with the court
that the judgment has been satisfied. 246 Pa. Code § 341. If your landlord does not do so, you
should file a written request to have the judgment marked satisfied with the court and serve it
on the landlord. 246 Pa. Code § 341.
If the landlord does not enter that the judgment has been satisfied within 90 days of a written
request without good cause, the landlord will be liable to the tenant for 1% of the judgment
amount, at least $250 and up to $2,500, every month the judgment is not marked satisfied. 42
Pa. Cons. Stat. § 8104(b).
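The monthly liability for failing to mark the judgment satisfied is one percent of the judgment with a floor and a cap. A minimal Python sketch of that rule, using a hypothetical function name, for illustration only:

def monthly_penalty(judgment_amount):
    # 1% of the judgment amount for every month the judgment is not marked satisfied,
    # but at least $250 and at most $2,500 (42 Pa. Cons. Stat. § 8104(b)).
    return min(max(0.01 * judgment_amount, 250.0), 2500.0)

# Example: a $3,000 judgment -> 1% is $30, so the $250 floor applies.
print(monthly_penalty(3000))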
8. Eviction Records
An “eviction record” is an official record—court filings, transcripts and orders, for example—
that contains information about a past or ongoing lawsuit to evict a tenant.
There is no uniform requirement across all Pennsylvania courts for what details must be
included in an eviction record; there is no guarantee that the information contained in eviction
records tells the whole story.
An eviction record may also show, incorrectly, that a tenant who was evicted for failure to pay
rent has not satisfied the judgment just because the landlord has failed to have the judgment
marked satisfied with the court.
Pennsylvania does not automatically seal or expunge eviction records, even if the case is
withdrawn.
Some cities in Pennsylvania, such as Philadelphia, may have eviction diversion or mediation
programs designed to help you and the landlord come to an agreement without using the court
process or creating an eviction record.
I have a year and a month left on my lease and my landlord is going to evict me even though I've paid my rent, how long do I have to leave after they actually start the process? |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | List and summarize the established exceptions to the general rule that employees are not protected by FECA when injured while traveling between home and work. Answer should not exceed 150 words. | U.S. Department of Labor
Office of Workers’ Compensation Programs
Procedure Manual
Division of Federal Employees' Compensation (DFEC)
FECA Part 2
6. To and From Work. Employees do not generally have the protection of the FECA when injured while en route between work and home.
a. Exceptions. There are five well-established exceptions to this general rule. These exceptions are:
(1) Where the employment requires the employee to travel;
(2) Where the employer contracts for and furnishes transportation to and from work;
(3) Where the employee is subject to emergency duty, as in the case of firefighters;
(4) Where the employee uses the highway or public transportation to do something incidental to employment with the knowledge and approval of the employer; and
(5) Where the employee is required to travel during a curfew established by local, municipal, county or state authorities because of civil disturbances or for other reasons.
b. Where the Employment Requires the Employee to Travel. This situation will not occur in the case of an employee having a fixed place of employment unless on an errand or special mission. It usually involves an employee who performs all or most of the work away from the industrial premises, such as a chauffeur, truck driver, or messenger. In cases of this type the official superior should be requested to submit a supplemental statement fully describing the employee's assigned duties and showing how and in what manner the work required the employee to travel, whether on the highway or by public transportation. In injury cases a similar statement should be obtained from the injured employee.
c. Where the Employer Contracts for and Furnishes Transportation to and from Work. Where this exception is claimed, the official superior should be requested to submit a supplemental statement showing, with appropriate explanation, whether the employee's transportation was furnished or otherwise provided by contract by the employer. In injury cases a similar statement should be obtained from the injured employee. Also see Program Memorandum 104 dated October 24, 1969.
The Safe, Accountable, Flexible, Efficient Transportation Equity Act of 2005 (Public Law 109-59) amends Title 31, Section 1344 of the U.S. Code to allow Federal agencies in the National Capital Region to pay for the costs of shuttle buses or other means of transportation between the place of employment and mass transit facilities. The bill states that for "purpose of any determination under chapter 81 of title 5 ... an individual shall not be considered to be 'in the performance of duty' or 'acting within the scope of his or her employment' by virtue of the fact that such individual is receiving transportation services" under this legislation.
If it is determined that a shuttle bus or other means of transportation to and from mass transit is authorized under this statute, then the injury is not considered to have occurred within the performance of duty. When requesting information from the agency about the employer-provided conveyance, the agency should be asked whether the service in question was provided pursuant to the above statutory authority.
d. Where the Employee is Subject to Emergency Duty.
(1) When it is alleged that the employee was subject to emergency duty, the official superior should be requested to submit:
(a) A copy of the injured employee's official position description, or other document showing that as the occasion arose, the duties did in fact require the performance of emergency duty; and
(b) A specific statement showing that at the time of the injury the employee was in fact traveling to or from work because of emergency duty.
(2) In disability cases, a statement from the injured employee should be requested showing whether at the time of the injury the employee was in fact going to or from work because of emergency duty.
e. Where the Employee Uses the Highway or Public Transportation to Perform a Service for the Employer.
(1) Where this exception is claimed, the official superior should be requested to submit a statement showing:
(a) The precise duty the employee had performed or was expected to perform for the employer during the trip in question; and
(b) Whether this was being done upon directions of the employer and, if not, whether the employer had prior knowledge of and had previously approved the employee's activity.
(2) In disability cases the injured employee should be requested to submit a similar statement.
f. Travel During a Curfew.
(1) When it has been determined that the employee was required to travel during a curfew established by local, municipal, county or state authorities because of civil disturbances or for other reasons, the official superior should be requested to submit:
(a) The reason the employee was requested to report for duty;
(b) Whether other employees were given administrative leave because of the curfew; and
(c) Whether the injury resulted from a specific hazard caused by the imposition of the curfew, such as an attack by rioting citizens.
(2) In disability cases the injured employee should be requested to submit a similar statement.
(3) When all the facts are developed, the case should be referred to the National Office. | {instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
List and summarize the established exceptions to the general rule that employees are not protected by FECA when injured while traveling between home and work. Answer should not exceed 150 words.
{passage 0}
==========
U.S. Department of Labor
Office of Workers’ Compensation Programs
Procedure Manual
Division of Federal Employees' Compensation (DFEC)
FECA Part 2
6. To and From Work. Employees do not generally have the protection of the FECA when injured while en route between work and home.
a. Exceptions. There are five well-established exceptions to this general rule. These exceptions are:
(1) Where the employment requires the employee to travel;
(2) Where the employer contracts for and furnishes transportation to and from work;
(3) Where the employee is subject to emergency duty, as in the case of firefighters;
(4) Where the employee uses the highway or public transportation to do something incidental to employment with the knowledge and approval of the employer; and
(5) Where the employee is required to travel during a curfew established by local, municipal, county or state authorities because of civil disturbances or for other reasons.
b. Where the Employment Requires the Employee to Travel. This situation will not occur in the case of an employee having a fixed place of employment unless on an errand or special mission. It usually involves an employee who performs all or most of the work away from the industrial premises, such as a chauffeur, truck driver, or messenger. In cases of this type the official superior should be requested to submit a supplemental statement fully describing the employee's assigned duties and showing how and in what manner the work required the employee to travel, whether on the highway or by public transportation. In injury cases a similar statement should be obtained from the injured employee.
c. Where the Employer Contracts for and Furnishes Transportation to and from Work. Where this exception is claimed, the official superior should be requested to submit a supplemental statement showing, with appropriate explanation, whether the employee's transportation was furnished or otherwise provided by contract by the employer. In injury cases a similar statement should be obtained from the injured employee. Also see Program Memorandum 104 dated October 24, 1969.
The Safe, Accountable, Flexible, Efficient Transportation Equity Act of 2005 (Public Law 109-59) amends Title 31, Section 1344 of the U.S. Code to allow Federal agencies in the National Capital Region to pay for the costs of shuttle buses or other means of transportation between the place of employment and mass transit facilities. The bill states that for "purpose of any determination under chapter 81 of title 5 ... an individual shall not be considered to be 'in the performance of duty' or 'acting within the scope of his or her employment' by virtue of the fact that such individual is receiving transportation services" under this legislation.
If it is determined that a shuttle bus or other means of transportation to and from mass transit is authorized under this statute, then the injury is not considered to have occurred within the performance of duty. When requesting information from the agency about the employer-provided conveyance, the agency should be asked whether the service in question was provided pursuant to the above statutory authority.
d. Where the Employee is Subject to Emergency Duty.
(1) When it is alleged that the employee was subject to emergency duty, the official superior should be requested to submit:
(a) A copy of the injured employee's official position description, or other document showing that as the occasion arose, the duties did in fact require the performance of emergency duty; and
(b) A specific statement showing that at the time of the injury the employee was in fact traveling to or from work because of emergency duty.
(2) In disability cases, a statement from the injured employee should be requested showing whether at the time of the injury the employee was in fact going to or from work because of emergency duty.
e. Where the Employee Uses the Highway or Public Transportation to Perform a Service for the Employer.
(1) Where this exception is claimed, the official superior should be requested to submit a statement showing:
(a) The precise duty the employee had performed or was expected to perform for the employer during the trip in question; and
(b) Whether this was being done upon directions of the employer and, if not, whether the employer had prior knowledge of and had previously approved the employee's activity.
(2) In disability cases the injured employee should be requested to submit a similar statement.
f. Travel During a Curfew.
(1) When it has been determined that the employee was required to travel during a curfew established by local, municipal, county or state authorities because of civil disturbances or for other reasons, the official superior should be requested to submit:
(a) The reason the employee was requested to report for duty;
(b) Whether other employees were given administrative leave because of the curfew; and
(c) Whether the injury resulted from a specific hazard caused by the imposition of the curfew, such as an attack by rioting citizens.
(2) In disability cases the injured employee should be requested to submit a similar statement.
(3) When all the facts are developed, the case should be referred to the National Office.
https://www.dol.gov/agencies/owcp/FECA/regs/compliance/DFECfolio/FECA-PT2/group1#20805 |
Address the user's request with the information contained within the provided text - Do not draw on external resources or from your base knowledge. | Please present a comprehensive history of polio and its vaccines in bullet-point format. | 1.2 History of poliomyelitis and polio vaccines
Poliomyelitis is a disease of great antiquity. Perhaps the earliest description is
evident in an Egyptian stele from around 1350 BC depicting a young man with
typical asymmetric flaccid paralysis and atrophy of the leg. Several scattered reports
of the disease also appear in the literature from the 17th and 18th century. By the
mid-19th century, the Industrial Revolution had brought increased urbanization to
Europe and North America and, with it, significant changes and improvements in
living conditions. Coincident with these massive changes was the advent of larger
and more frequent outbreaks of poliomyelitis. From the late 1800s, outbreaks were
occurring in several European countries and in the United States, and they remained
a dominant public health problem in the developed world for the first half of the
20th century.
A major landmark in the study of poliomyelitis was the successful passage of the
virus to nonhuman primates by Landsteiner and Popper in 1909. The availability of
animal models provided the first opportunity to study the disease outside of human
patients and produced important information on the process of infection and the
pathophysiology of the disease. Further studies on the infectious agent awaited the
crucial development by Enders, Weller, and Robbins in 1949 of tissue culture
systems for in vitro propagation of the virus. This advance, and the recognition of
three distinct serotypes, opened the way for all subsequent work on vaccines and
study of the biochemical and biophysical properties of the polioviruses.
By the 1950s, two different approaches to the prevention of poliomyelitis by
vaccination were developed. Salk and Younger produced the first successful polio
vaccine in 1954 by chemical inactivation of tissue culture-propagated virus using
formaldehyde. This vaccine was completely non-infectious, yet, following injection,
it elicited an immune response that was protective against paralytic disease. During
the same period, many laboratories sought to produce live, attenuated polio
vaccines. The OPV strains of Sabin were licensed in 1961 following extensive field
trials in the former Soviet Union, Eastern Europe and Latin America. Mass
immunization campaigns in many countries began in 1962 and 1963. Both the
inactivated polio vaccine (IPV) and OPV contain three components, one for each
immunologically distinct serotype of poliovirus. Some countries use enhanced IPV
(eIPV) that contains higher D-antigenic units per dose for types 2 and 3 than
standard IPV. Widespread immunization with IPV, and since 1963 with OPV, has
virtually eliminated poliomyelitis in most developed countries.
1.3 Characterization of the pathogen
The polioviruses belong to the genus Enterovirus in the family Picornaviridae. All
are small, round 30 nm particles with icosahedral symmetry, and they contain no
essential lipid envelope. Polioviruses share most of their biochemical and
biophysical characteristics with the other enteroviruses and are different from some
of the other picornaviruses. The viral particles have a buoyant density of 1.34 g/ml
in caesium chloride and a sedimentation coefficient of approximately 156S. The
infectious particles are relatively heat resistant (when stabilized by magnesium
cations), resistant to acid pH (pH 3 to 5 for one to three hours), and also resistant to
many common detergents and disinfectants, including common soap, non-ionic
detergents, ether, chloroform, and other lipid solvents. The virus is stable for weeks
at 4°C and for days at room temperature. Drying, ultraviolet light, high heat,
formaldehyde, and free chlorine, however, readily inactivate the virus.
Polioviruses and the enteroviruses are distinguished from the other picornaviruses
on the basis of physical properties such as buoyant density in caesium chloride and
stability in weak acid. The three poliovirus serotypes are distinguished from the
other enteroviruses by neutralization with serotype-specific antisera and the
propensity to cause paralytic illness. The Mahoney strain of type 1 poliovirus is the
prototype for the polioviruses, the genus enterovirus, and the family Picornaviridae.
It is among the most-studied and best-characterized agents of human disease.
The poliovirus consists of 60 copies each of four polypeptide chains that form a
very highly structured shell. Located inside this shell, the viral genome consists of a
single molecule of ribonucleic acid (RNA), which is about 7500 nucleotides long.
The four capsid polypeptides are produced by the proteolytic cleavage of a single
polyprotein precursor, and are designated VP1 through VP4. Attached covalently to
the amino-terminal of the VP4 protein is a single molecule of myristilate. In
addition, one small protein, VPg, is covalently attached to the 5'-end of the viral
RNA. A major advance in studies on the structure of polioviruses occurred with the
solution of the crystal structure to a resolution of 0.29 nm. From the
three-dimensional structure of the poliovirus, VP1 contributes the majority of the
amino acid residues on the virus surface, VP2 and VP3 are partially exposed on the
surface, and VP4 is completely internal.
The information concerning the surface of the virus has been particularly useful in
understanding the neutralization of poliovirus by antibodies. Studies with
monoclonal neutralizing antibodies and mutant viruses resistant to them have
revealed four main antigenic sites on the virus. The relative importance of individual
sites is different for each of the three serotypes of poliovirus. The X-ray crystal
structure has confirmed that the antigenic sites are composed of amino acid residues
located on the virus surface and exposed loops of capsid proteins. Adjacent domains
of the same and other capsid proteins influence the conformation of the loops. This
explains why antigenicity of the virus is destroyed by disruption of the virus
structure. In addition, there are other antigenic sites that elicit an immune response
that is not neutralizing.
The poliovirus-neutralizing antibody response is serotype-specific, with the
exception of some minor cross-reaction between poliovirus 1 and 2. Heat-disrupted
viruses, particularly those heated in the presence of detergent, induce antibodies that
react with many enteroviruses. These broadly reacting antibodies are generally not
neutralizing. Antisera raised in animals to each of the viruses are largely
type-specific and are used for the determination of serotype in a neutralization assay.
Although more than one T-cell epitope has been described in both structural and
non-structural viral proteins, the role of cell-mediated immunity in controlling
infection has not been determined.
Polioviruses are among the simplest viruses in terms of genetic complexity and size.
The RNA genomes from all three serotypes of poliovirus have been cloned and
sequenced. The genomic RNA is infectious and serves as messenger RNA for viral
protein synthesis. The RNA is translated in a single open reading frame into one
large polyprotein, which is then processed through proteolytic cleavage by two
distinct virus-encoded proteases into the functional viral proteins (Figure 1.2).
Despite much research and the simple nature of the virus, several steps of the virus
growth cycle have remained elusive, including the site and mode of virus entry and
release of the genome into the cytoplasm. Polioviruses initially bind to a specific
plasma membrane protein, the poliovirus receptor (PVR; CD155), a member of the
immunoglobulin superfamily of proteins. The binding to the receptor triggers
conformational changes in the capsid structure that are necessary for the release of
the genome into the cytoplasm (uncoating). No other picornaviruses use this
protein as their cellular receptor, a fact that has been exploited in the eradication
programme by the use of a recombinant murine cell line expressing the human PVR
to selectively isolate polioviruses.
Once the viral genome has entered the cell, the replication cycle begins when the
viral RNA is transcribed by the viral polymerase beginning at the 3'-end of the
infecting viral RNA to generate a complementary RNA (cRNA). In the next step,
which is dependent on a “host factor”, the progeny viral RNA is synthesized from
the cRNA. The newly synthesized viral RNA is covalently linked to the VPg
protein at the 5'- end of the RNA, and then only the positive sense strand of RNA
is encapsidated in the viral structural proteins to form infectious viral particles. The
extensive studies into virus replication and assembly have resulted in the remarkable
accomplishment of complete cell-free replication of poliovirus beginning only with
the viral RNA. | Address the user's request with the information contained within the provided text - Do not draw on external resources or from your base knowledge.
1.2 History of poliomyelitis and polio vaccines
Poliomyelitis is a disease of great antiquity. Perhaps the earliest description is
evident in an Egyptian stele from around 1350 BC depicting a young man with
typical asymmetric flaccid paralysis and atrophy of the leg. Several scattered reports
of the disease also appear in the literature from the 17th and 18th century. By the
mid-19th century, the Industrial Revolution had brought increased urbanization to
Europe and North America and, with it, significant changes and improvements in
living conditions. Coincident with these massive changes was the advent of larger
and more frequent outbreaks of poliomyelitis. From the late 1800s, outbreaks were
occurring in several European countries and in the United States, and they remained
a dominant public health problem in the developed world for the first half of the
20th century.
A major landmark in the study of poliomyelitis was the successful passage of the
virus to nonhuman primates by Landsteiner and Popper in 1909. The availability of
animal models provided the first opportunity to study the disease outside of human
patients and produced important information on the process of infection and the
pathophysiology of the disease. Further studies on the infectious agent awaited the
crucial development by Enders, Weller, and Robbins in 1949 of tissue culture
systems for in vitro propagation of the virus. This advance, and the recognition of
three distinct serotypes, opened the way for all subsequent work on vaccines and
study of the biochemical and biophysical properties of the polioviruses.
By the 1950s, two different approaches to the prevention of poliomyelitis by
vaccination were developed. Salk and Younger produced the first successful polio
vaccine in 1954 by chemical inactivation of tissue culture-propagated virus using
formaldehyde. This vaccine was completely non-infectious, yet, following injection,
it elicited an immune response that was protective against paralytic disease. During
the same period, many laboratories sought to produce live, attenuated polio
vaccines. The OPV strains of Sabin were licensed in 1961 following extensive field
trials in the former Soviet Union, Eastern Europe and Latin America. Mass
immunization campaigns in many countries began in 1962 and 1963. Both the
inactivated polio vaccine (IPV) and OPV contain three components, one for each
immunologically distinct serotype of poliovirus. Some countries use enhanced IPV
(eIPV) that contains higher D-antigenic units per dose for types 2 and 3 than
standard IPV. Widespread immunization with IPV, and since 1963 with OPV, has
virtually eliminated poliomyelitis in most developed countries.
1.3 Characterization of the pathogen
The polioviruses belong to the genus Enterovirus in the family Picornaviridae. All
are small, round 30 nm particles with icosahedral symmetry, and they contain no
essential lipid envelope. Polioviruses share most of their biochemical and
biophysical characteristics with the other enteroviruses and are different from some
of the other picornaviruses. The viral particles have a buoyant density of 1.34 g/ml
in caesium chloride and a sedimentation coefficient of approximately 156S. The
infectious particles are relatively heat resistant (when stabilized by magnesium
cations), resistant to acid pH (pH 3 to 5 for one to three hours), and also resistant to
many common detergents and disinfectants, including common soap, non-ionic
detergents, ether, chloroform, and other lipid solvents. The virus is stable for weeks
at 4°C and for days at room temperature. Drying, ultraviolet light, high heat,
formaldehyde, and free chlorine, however, readily inactivate the virus.
Polioviruses and the enteroviruses are distinguished from the other picornaviruses
on the basis of physical properties such as buoyant density in caesium chloride and
stability in weak acid. The three poliovirus serotypes are distinguished from the
other enteroviruses by neutralization with serotype-specific antisera and the
propensity to cause paralytic illness. The Mahoney strain of type 1 poliovirus is the
prototype for the polioviruses, the genus enterovirus, and the family Picornaviridae.
It is among the most-studied and best-characterized agents of human disease.
The poliovirus consists of 60 copies each of four polypeptide chains that form a
very highly structured shell. Located inside this shell, the viral genome consists of a
single molecule of ribonucleic acid (RNA), which is about 7500 nucleotides long.
The four capsid polypeptides are produced by the proteolytic cleavage of a single
polyprotein precursor, and are designated VP1 through VP4. Attached covalently to
the amino-terminal of the VP4 protein is a single molecule of myristilate. In
addition, one small protein, VPg, is covalently attached to the 5'-end of the viral
RNA. A major advance in studies on the structure of polioviruses occurred with the
solution of the crystal structure to a resolution of 0.29 nm. From the
three-dimensional structure of the poliovirus, VP1 contributes the majority of the
amino acid residues on the virus surface, VP2 and VP3 are partially exposed on the
surface, and VP4 is completely internal.
The information concerning the surface of the virus has been particularly useful in
understanding the neutralization of poliovirus by antibodies. Studies with
monoclonal neutralizing antibodies and mutant viruses resistant to them have
revealed four main antigenic sites on the virus. The relative importance of individual
sites is different for each of the three serotypes of poliovirus. The X-ray crystal
structure has confirmed that the antigenic sites are composed of amino acid residues
located on the virus surface and exposed loops of capsid proteins. Adjacent domains
of the same and other capsid proteins influence the conformation of the loops. This
explains why antigenicity of the virus is destroyed by disruption of the virus
structure. In addition, there are other antigenic sites that elicit an immune response
that is not neutralizing.
The poliovirus-neutralizing antibody response is serotype-specific, with the
exception of some minor cross-reaction between poliovirus 1 and 2. Heat-disrupted
viruses, particularly those heated in the presence of detergent, induce antibodies that
react with many enteroviruses. These broadly reacting antibodies are generally not
neutralizing. Antisera raised in animals to each of the viruses are largely
type-specific and are used for the determination of serotype in a neutralization assay.
Although more than one T-cell epitope has been described in both structural and
non-structural viral proteins, the role of cell-mediated immunity in controlling
infection has not been determined.
Polioviruses are among the simplest viruses in terms of genetic complexity and size.
The RNA genomes from all three serotypes of poliovirus have been cloned and
sequenced. The genomic RNA is infectious and serves as messenger RNA for viral
protein synthesis. The RNA is translated in a single open reading frame into one
large polyprotein, which is then processed through proteolytic cleavage by two
distinct virus-encoded proteases into the functional viral proteins (Figure 1.2).
Despite much research and the simple nature of the virus, several steps of the virus
growth cycle have remained elusive, including the site and mode of virus entry and
release of the genome into the cytoplasm. Polioviruses initially bind to a specific
plasma membrane protein, the poliovirus receptor (PVR; CD155), a member of the
immunoglobulin superfamily of proteins. The binding to the receptor triggers
conformational changes in the capsid structure that are necessary for the release of
the genome into the cytoplasm (uncoating). No other picornaviruses use this
protein as their cellular receptor, a fact that has been exploited in the eradication
programme by the use of a recombinant murine cell line expressing the human PVR
to selectively isolate polioviruses.
Once the viral genome has entered the cell, the replication cycle begins when the
viral RNA is transcribed by the viral polymerase beginning at the 3'-end of the
infecting viral RNA to generate a complementary RNA (cRNA). In the next step,
which is dependent on a “host factor”, the progeny viral RNA is synthesized from
the cRNA. The newly synthesized viral RNA is covalently linked to the VPg
protein at the 5'- end of the RNA, and then only the positive sense strand of RNA
is encapsidated in the viral structural proteins to form infectious viral particles. The
extensive studies into virus replication and assembly have resulted in the remarkable
accomplishment of complete cell-free replication of poliovirus beginning only with
the viral RNA.
Please present a comprehensive history of polio and its vaccines in bullet-point format. |
You are an assistant for a marketing team. You analyze articles about companies and products and answer any questions the marketing team may have. Do not use any sources of external information in your answer. Base your answer solely on the article provided. | Is Michael Jordan mentioned in this article? | Our NIKE Brand product offerings are aligned around our consumer construct focused on Men's, Women's and Kids'. We also design products specifically for the Jordan Brand and Converse. We believe this approach allows us to create products that better meet individual consumer needs while accelerating our largest growth opportunities.
NIKE's athletic footwear products are designed primarily for specific athletic use, although a large percentage of the products are worn for casual or leisure purposes. We place considerable emphasis on innovation and high-quality construction in the development and manufacturing of our products. Our Men's, Women's and Jordan Brand footwear products currently lead in footwear sales and we expect them to continue to do so.
We also sell sports apparel, which features the same trademarks and are sold predominantly through the same marketing and distribution channels as athletic footwear. Our sports apparel, similar to our athletic footwear products, is designed primarily for
athletic use, although many of the products are worn for casual or leisure purposes, and demonstrates our commitment to innovation and high-quality construction. Our Men's and Women's apparel products currently lead in apparel sales and we expect
them to continue to do so. We often market footwear, apparel and accessories in "collections" of similar use or by category. We also market apparel with licensed college and professional team and league logos.
We sell a line of performance equipment and accessories under the NIKE Brand name, including bags, socks, sport balls, eyewear, timepieces, digital devices, bats, gloves, protective equipment and other equipment designed for sports activities. We also sell small amounts of various plastic products to other manufacturers through our wholly-owned subsidiary, NIKE IHM, Inc., doing business as Air Manufacturing Innovation.
Our Jordan Brand designs, distributes and licenses athletic and casual footwear, apparel and accessories predominantly focused on basketball performance and culture using the Jumpman trademark. Sales and operating results for Jordan Brand products are reported within the respective NIKE Brand geographic operating segments.
Our wholly-owned subsidiary brand, Converse, headquartered in Boston, Massachusetts, designs, distributes and licenses casual sneakers, apparel and accessories under the Converse, Chuck Taylor, All Star, One Star, Star Chevron and Jack Purcell trademarks. Operating results of the Converse brand are reported on a stand-alone basis.
In addition to the products we sell to our wholesale customers and directly to consumers through our NIKE Direct operations, we have also entered into license agreements that permit unaffiliated parties to manufacture and sell, using NIKE-owned trademarks,
certain apparel, digital devices and applications and other equipment designed for sports activities. | You are an assistant for a marketing team. You analyze articles about companies and products and answer any questions the marketing team may have. Do not use any sources of external information in your answer. Base your answer solely on the article provided.
Question: Is Michael Jordan mentioned in this article?
Article:
"Our NIKE Brand product offerings are aligned around our consumer construct focused on Men's, Women's and Kids'. We also design products specifically for the Jordan Brand and Converse. We believe this approach allows us to create products that better meet individual consumer needs while accelerating our largest growth opportunities.
NIKE's athletic footwear products are designed primarily for specific athletic use, although a large percentage of the products are worn for casual or leisure purposes. We place considerable emphasis on innovation and high-quality construction in the development and manufacturing of our products. Our Men's, Women's and Jordan Brand footwear products currently lead in footwear sales and we expect them to continue to do so.
We also sell sports apparel, which features the same trademarks and are sold predominantly through the same marketing and distribution channels as athletic footwear. Our sports apparel, similar to our athletic footwear products, is designed primarily for
athletic use, although many of the products are worn for casual or leisure purposes, and demonstrates our commitment to innovation and high-quality construction. Our Men's and Women's apparel products currently lead in apparel sales and we expect
them to continue to do so. We often market footwear, apparel and accessories in "collections" of similar use or by category. We also market apparel with licensed college and professional team and league logos.
We sell a line of performance equipment and accessories under the NIKE Brand name, including bags, socks, sport balls, eyewear, timepieces, digital devices, bats, gloves, protective equipment and other equipment designed for sports activities. We also sell small amounts of various plastic products to other manufacturers through our wholly-owned subsidiary, NIKE IHM, Inc., doing business as Air Manufacturing Innovation.
Our Jordan Brand designs, distributes and licenses athletic and casual footwear, apparel and accessories predominantly focused on basketball performance and culture using the Jumpman trademark. Sales and operating results for Jordan Brand products are reported within the respective NIKE Brand geographic operating segments.
Our wholly-owned subsidiary brand, Converse, headquartered in Boston, Massachusetts, designs, distributes and licenses casual sneakers, apparel and accessories under the Converse, Chuck Taylor, All Star, One Star, Star Chevron and Jack Purcell trademarks. Operating results of the Converse brand are reported on a stand-alone basis.
In addition to the products we sell to our wholesale customers and directly to consumers through our NIKE Direct operations, we have also entered into license agreements that permit unaffiliated parties to manufacture and sell, using NIKE-owned trademarks,
certain apparel, digital devices and applications and other equipment designed for sports activities." |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | lately I have been trying to learn more about my uncles heart disease issues. Can you tell me what the article says about hypertension? Make it 380-400 words without referencing what it is caused by, or vitamin D | In recent years, researchers and public health programs and practices have focused on preventing, managing, and controlling traditional CVD risk factors by instituting timely intervention programs, identifying social determinants of health (SDOH), examining disparities in CVD risks, assessing the COVID-19 pandemic’s impact on CVD risks, and implementing collective efforts through community-based approaches to achieve population-level improvements in cardiovascular health. This special PCD collection of 20 articles published from January 2020 through November 2022 highlights some of these efforts by using multiple data sources collected before or during the pandemic. For instance, cigarette smoking and risk-enhancing factors related to pregnancy have been shown to increase CVD risks with significant implications (eg, increased infant mortality). Disparities in hypertension, stroke, and stroke mortality exist, exhibiting significant sociodemographic (eg, racial) and geographic (eg, rural–urban, county, zip code) variations. Intervention programs, such as behavioral modifications strengthening chronic disease awareness, use of self-measured blood pressure monitoring, and sodium intake reduction, are evaluated. The impact of COVID-19 on CVD is also explored. Finally, systematic reviews and meta-analyses evaluated the associations of circulating vitamin D levels, vitamin D supplementation, or high-density lipoprotein cholesterol (HDL-C) with blood pressure or stroke. These 20 articles advance our understanding of effective CVD risk management and intervention programs in multiple settings — in the general population and among high-risk groups — with a health equity lens across 3 broad themes further explored in this essay:
Examining factors contributing to CVD risk
Exploring factors contributing to disparities in CVD
Using community-based approaches to decrease CVD
Examining the Factors Contributing to CVD Risk
The greatest contributors to CVD-related years of life lost globally are tobacco exposure, hypertension, high body mass index (BMI), and high fasting plasma glucose (3). Tobacco exposure, including cigarette smoking, secondhand smoke, and use of smokeless tobacco, contributed to 8.7 million deaths worldwide in 2019, one-third of which were due to CVD (3). Hypertension affects more than 4 billion people worldwide, representing a near doubling in the absolute prevalence of hypertension since 1990 (3). In the US, nearly half of adults (47%) have hypertension, but only about 1 in 4 (24%) have their condition under control (7). Elevated BMI continues to increase globally, with significant effects on death, disability, and quality of life (3). The prevalence of obesity has increased worldwide in the past 50 years, reaching pandemic levels. Obesity represents a major health challenge because it substantially increases the risk of diseases such as hypertension, myocardial infarction, stroke, type 2 diabetes, and dementia, thereby contributing to a decline in both quality of life and life expectancy (8). Furthermore, global increases in high fasting plasma glucose and its sequelae, type 2 diabetes, have mirrored the increases seen in BMI over the past 3 decades (9). Other behavioral risks (eg, unhealthy diet, physical inactivity, inadequate sleep, excessive alcohol use); environmental risks (eg, air pollution, extreme temperatures); and social risks (eg, house and food insecurity) also contribute to increased CVD burden and disparities in cardiovascular morbidity and mortality (10)
Several of the contextual risk factors attributed to increased CVD burden are covered in this special collection. Cigarette smoking persists among adults with chronic disease. Using data from the 2019 National Health Interview Survey (NHIS), Loretan and colleagues reported that more than 1 in 4 US adults aged 18 to 64 years with 1 or more chronic diseases associated with smoking were current smokers (11). The current cigarette smoking prevalence in the US reached 51.9% among adults aged 18 to 44 years with 2 or more chronic diseases (11). Furthermore, that study showed that smoking cessation services were not being provided to almost 1 in 3 people who have a chronic disease, leaving important steps to be taken toward successful smoking cessation in this population (11). Also concerning, rates of smoking vary significantly across countries, and approximately 1 billion people smoke globally, with significant negative implications for cardiovascular health (3). Goulding and colleagues used National Health and Nutrition Examination Survey data collected from 2011 through 2018 to provide estimates of the prevalence of high blood pressure among US children aged 8 to 17 years. The authors documented that elevated blood pressure was most prevalent among children who were older, male, or non-Hispanic Black, with factors beyond inequalities in body weight likely contributing to disparities in elevated blood pressure (12). Furthermore, a meta-analysis conducted by Qie and colleagues determined that a high level of HDL-C may provide a protective effect on the risk of total stroke and ischemic stroke but may increase the risk of intracerebral hemorrhage (13). Another meta-analysis by Zhang and colleagues found an L-shaped dose–response relationship between circulating vitamin D levels and the risk of hypertension; however, the pooled results of randomized controlled trials did not show vitamin D supplementation to be effective in preventing hypertension (14).
Studies in this collection also identified populations and communities with higher prevalence or at higher risk for CVD. In a cross-sectional study using 2018 NHIS data, Mendez and colleagues documented a higher prevalence of CVD and its risk factors among US adults with vision impairment (15). Salahuddin and colleagues documented zip code variations in infant mortality rates associated with a high prevalence of maternal cardiometabolic high-risk conditions (chronic or gestational diabetes, chronic or gestational hypertension, smoking during pregnancy, and prepregnancy obesity) in 2 counties in Texas (16). Findings from these articles could direct efforts to implement appropriate strategies to prevent, manage, and control CVD in populations at high risk.
Exploring Factors Contributing to Disparities in CVD
CVD and its related risk factors are increasingly recognized as growing indicators of global health disparities (17). Globally, differences in morbidity and mortality from CVD exist among high-, middle-, and low-income countries and across ethnic groups (1,3,5,6,17,18). In the US, disparities in CVD morbidity, mortality, and risk factors have persisted for decades, with concerning stagnation and significant upward trends since the early 2000s (18). Disparities are largely influenced by demographic, socioeconomic, and environmental factors (19,20). For example, African American and American Indian adults experience a higher burden of cardiovascular risk factors and CVD compared with non-Hispanic White adults (18). Unfortunately, structural racism remains a significant cause of poor cardiovascular health, restricting racial and ethnic minority populations from opportunities to live healthier lives, in healthier neighborhoods, and from access to quality education and health care (20). | [question]
lately I have been trying to learn more about my uncle's heart disease issues. Can you tell me what the article says about hypertension? Make it 380-400 words without referencing what it is caused by, or vitamin D
=====================
[text]
In recent years, researchers and public health programs and practices have focused on preventing, managing, and controlling traditional CVD risk factors by instituting timely intervention programs, identifying social determinants of health (SDOH), examining disparities in CVD risks, assessing the COVID-19 pandemic’s impact on CVD risks, and implementing collective efforts through community-based approaches to achieve population-level improvements in cardiovascular health. This special PCD collection of 20 articles published from January 2020 through November 2022 highlights some of these efforts by using multiple data sources collected before or during the pandemic. For instance, cigarette smoking and risk-enhancing factors related to pregnancy have been shown to increase CVD risks with significant implications (eg, increased infant mortality). Disparities in hypertension, stroke, and stroke mortality exist, exhibiting significant sociodemographic (eg, racial) and geographic (eg, rural–urban, county, zip code) variations. Intervention programs, such as behavioral modifications strengthening chronic disease awareness, use of self-measured blood pressure monitoring, and sodium intake reduction, are evaluated. The impact of COVID-19 on CVD is also explored. Finally, systematic reviews and meta-analyses evaluated the associations of circulating vitamin D levels, vitamin D supplementation, or high-density lipoprotein cholesterol (HDL-C) with blood pressure or stroke. These 20 articles advance our understanding of effective CVD risk management and intervention programs in multiple settings — in the general population and among high-risk groups — with a health equity lens across 3 broad themes further explored in this essay:
Examining factors contributing to CVD risk
Exploring factors contributing to disparities in CVD
Using community-based approaches to decrease CVD
Examining the Factors Contributing to CVD Risk
The greatest contributors to CVD-related years of life lost globally are tobacco exposure, hypertension, high body mass index (BMI), and high fasting plasma glucose (3). Tobacco exposure, including cigarette smoking, secondhand smoke, and use of smokeless tobacco, contributed to 8.7 million deaths worldwide in 2019, one-third of which were due to CVD (3). Hypertension affects more than 4 billion people worldwide, representing a near doubling in the absolute prevalence of hypertension since 1990 (3). In the US, nearly half of adults (47%) have hypertension, but only about 1 in 4 (24%) have their condition under control (7). Elevated BMI continues to increase globally, with significant effects on death, disability, and quality of life (3). The prevalence of obesity has increased worldwide in the past 50 years, reaching pandemic levels. Obesity represents a major health challenge because it substantially increases the risk of diseases such as hypertension, myocardial infarction, stroke, type 2 diabetes, and dementia, thereby contributing to a decline in both quality of life and life expectancy (8). Furthermore, global increases in high fasting plasma glucose and its sequelae, type 2 diabetes, have mirrored the increases seen in BMI over the past 3 decades (9). Other behavioral risks (eg, unhealthy diet, physical inactivity, inadequate sleep, excessive alcohol use); environmental risks (eg, air pollution, extreme temperatures); and social risks (eg, house and food insecurity) also contribute to increased CVD burden and disparities in cardiovascular morbidity and mortality (10)
Several of the contextual risk factors attributed to increased CVD burden are covered in this special collection. Cigarette smoking persists among adults with chronic disease. Using data from the 2019 National Health Interview Survey (NHIS), Loretan and colleagues reported that more than 1 in 4 US adults aged 18 to 64 years with 1 or more chronic diseases associated with smoking were current smokers (11). The current cigarette smoking prevalence in the US reached 51.9% among adults aged 18 to 44 years with 2 or more chronic diseases (11). Furthermore, that study showed that smoking cessation services were not being provided to almost 1 in 3 people who have a chronic disease, leaving important steps to be taken toward successful smoking cessation in this population (11). Also concerning, rates of smoking vary significantly across countries, and approximately 1 billion people smoke globally, with significant negative implications for cardiovascular health (3). Goulding and colleagues used National Health and Nutrition Examination Survey data collected from 2011 through 2018 to provide estimates of the prevalence of high blood pressure among US children aged 8 to 17 years. The authors documented that elevated blood pressure was most prevalent among children who were older, male, or non-Hispanic Black, with factors beyond inequalities in body weight likely contributing to disparities in elevated blood pressure (12). Furthermore, a meta-analysis conducted by Qie and colleagues determined that a high level of HDL-C may provide a protective effect on the risk of total stroke and ischemic stroke but may increase the risk of intracerebral hemorrhage (13). Another meta-analysis by Zhang and colleagues found an L-shaped dose–response relationship between circulating vitamin D levels and the risk of hypertension; however, the pooled results of randomized controlled trials did not show vitamin D supplementation to be effective in preventing hypertension (14).
Studies in this collection also identified populations and communities with higher prevalence or at higher risk for CVD. In a cross-sectional study using 2018 NHIS data, Mendez and colleagues documented a higher prevalence of CVD and its risk factors among US adults with vision impairment (15). Salahuddin and colleagues documented zip code variations in infant mortality rates associated with a high prevalence of maternal cardiometabolic high-risk conditions (chronic or gestational diabetes, chronic or gestational hypertension, smoking during pregnancy, and prepregnancy obesity) in 2 counties in Texas (16). Findings from these articles could direct efforts to implement appropriate strategies to prevent, manage, and control CVD in populations at high risk.
Exploring Factors Contributing to Disparities in CVD
CVD and its related risk factors are increasingly recognized as growing indicators of global health disparities (17). Globally, differences in morbidity and mortality from CVD exist among high-, middle-, and low-income countries and across ethnic groups (1,3,5,6,17,18). In the US, disparities in CVD morbidity, mortality, and risk factors have persisted for decades, with concerning stagnation and significant upward trends since the early 2000s (18). Disparities are largely influenced by demographic, socioeconomic, and environmental factors (19,20). For example, African American and American Indian adults experience a higher burden of cardiovascular risk factors and CVD compared with non-Hispanic White adults (18). Unfortunately, structural racism remains a significant cause of poor cardiovascular health, restricting racial and ethnic minority populations from opportunities to live healthier lives, in healthier neighborhoods, and from access to quality education and health care (20).
https://www.cdc.gov/pcd/issues/2022/22_0347.htm
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
You can only respond with the information in the context block. Please give your response in a simple tone that could be shared with a non-legal audience and easily understood. | Please summarize the determinations made in the text provided and explain the consequences of the rulings made. | In the United States Court of Federal Claims, No. 13-821 (Filed: 2 January 2024). INGHAM REG’L MEDICAL CENTER, n/k/a MCLAREN GREATER LANSING, et al., Plaintiffs, v. THE UNITED STATES, Defendant. Plaintiffs are six hospitals purporting to represent a class of approximately 1,610 hospitals across the United States in a suit requesting, among other things, the Court interpret what the Federal Circuit has deemed an “extremely strange” contract.1 [Footnote 1: 9 June 2022 Oral Arg. Tr. at 161:7–13, ECF No. 259 (“THE COURT: So the Federal Circuit panel, when the case was argued, characterized this agreement as extremely strange. [THE GOVERNMENT]: That is accurate. It is extremely strange. THE COURT: It is extremely strange? [THE GOVERNMENT]: It is.”).] This contract arose when hospitals complained the government underpaid reimbursements for Department of Defense Military Health System, TRICARE, outpatient services rendered between 2003 and 2009. In 2011, after completion of a data analysis, the government voluntarily entered a discretionary payment process contract with plaintiffs and offered net adjusted payments. In November 2022, after nine years of litigation and one Federal Circuit appeal, the Court granted in part and denied in part the government’s Motion for Summary Judgment. As the only surviving breach of contract claims concern the government’s duty to extract, analyze, and adjust line items from its database, the Court required the parties to file a joint status report regarding the effect of summary judgment on plaintiffs’ Renewed Motion to Certify a Class Action. Following a status conference, plaintiffs filed a discovery motion related to class certification. For the following reasons, the Court grants-in-part and denies-in-part plaintiffs’ Motion. I. Background A. Factual and Procedural History2 [Footnote 2: The factual and procedural history in this Order contains only those facts pertinent to plaintiffs’ Motion for Discovery, ECF No. 269.] TRICARE is a “military health care system” which “provides medical and dental care for current and former members of the military and their dependents.” Ingham Reg’l Med. Ctr. v. United States, 874 F.3d 1341, 1342 (Fed. Cir. 2017). TRICARE Management Activity (TMA), a “field office in the Defense Department [(DoD)],” managed the TRICARE system.3 N. Mich. Hosps., Inc. v. Health Net Fed. Servs., LLC, 344 F. App’x 731, 734 (3d Cir. 2009). [Footnote 3: The Defense Health Agency now manages activities previously managed by TMA.] In 2001, Congress amended the TRICARE statute to require DoD to follow Medicare rules when reimbursing outside healthcare providers. Ingham Reg’l Med. Ctr., 874 F.3d at 1343 (citing 10 U.S.C. § 1079(j)(2) (2002)). To facilitate transition to Medicare rules, in 2005, DoD issued a Final Rule which specified “[f]or most outpatient services, hospitals would receive payments ‘based on the TRICARE-allowable cost method in effect for professional providers or the [Civilian Health and Medical Program of the Uniformed Services] (CHAMPUS) Maximum Allowable Charge (CMAC).’” Id. (quoting TRICARE; Sub-Acute Care Program; Uniform Skilled Nursing Facility Benefit; Home Health Care Benefit; Adopting Medicare Payment Methods for Skilled Nursing Facilities and Home Health Care Providers, 70 Fed. Reg. 61368, 61371 (Oct. 24, 2005) (codified as amended at 32 C.F.R. § 199)).
The TRICARE-allowable cost method “applied until 2009, when TRICARE introduced a new payment system for hospital outpatient services that was similar to the Medicare [Outpatient Prospective Payment System (OPPS)].” Id. In response to hospital complaints of payment issues, TRICARE hired Kennell and Associates, a consulting firm, to “undertake a study [(‘Kennell study’)] of the accuracy of its payments to the hospitals.” Ingham Reg’l Med. Ctr., 874 F.3d at 1343–44. The Kennell study “compared CMAC payments to the payments that would have been made using Medicare payment principles, and determined that DoD ‘(1) underpaid hospitals for outpatient radiology but, (2) correctly paid hospitals for all other outpatient services.’” Id. at 1344 (emphasis omitted) (citation omitted). From the Kennell study findings, “DoD created a discretionary payment process [(DPP)],” and, on 25 April 2011, DoD notified hospitals by letter of the process for them to “request a review of their TRICARE reimbursements (the ‘Letter’)” and “published a document titled ‘NOTICE TO HOSPITALS OF POTENTIAL ADJUSTMENT TO PAST PAYMENTS FOR OUTPATIENT RADIOLOGY SERVICES’ (the ‘Notice’)” on the TRICARE website. Id.; App. to Def.’s MSJ at A3–A9, ECF No. 203-1. The Notice described a nine-step methodology to “govern the review of payments for hospital outpatient radiology services and [the] payment of any discretionary net adjustments” by which hospitals could “request an analysis of their claims data for possible discretionary adjustment.” App. to Def.’s MSJ at A7. On 21 October 2013, plaintiffs brought this action claiming the government underpaid them for certain outpatient medical services they provided between 1 August 2003 and 1 May 2009. See Ingham Reg’l Med. Ctr. v. United States, 126 Fed. Cl. 1, 9 (2016), aff’d in part, rev’d in part, 874 F.3d 1341 (Fed. Cir. 2017). Plaintiffs allege the approximately six years of underpayment breached two contracts and violated various statutory and regulatory provisions. Id. Plaintiffs estimate several thousand hospitals submitted requests for discretionary payment, including the six named plaintiffs in this case. See id. at 16. Plaintiffs therefore seek to represent a class of as many as 1,610 similarly situated hospitals. See Pls.’ Mem. in Supp. of Mot. to Certify at 1, ECF No. 77; see also Mot. to Certify, ECF No. 76. On 11 February 2020, during the parties’ second discovery period, plaintiffs requested from the government “[a]ny and all data concerning hospital outpatient services claims and TRICARE reimbursement for hospital outpatient services claims during the relevant time period . . . .” See App. to Pls.’ Disc. Mot. at 23, ECF No. 269; Gov’t’s Disc. Resp. at 11, ECF No. 270. The government rejected this request for records from “thousands of hospitals . . . that are not [named] plaintiffs” on 16 March 2020 and instead only “produce[d] the data requested for the six plaintiffs in this lawsuit.” App. to Pls.’ Disc. Mot. at 29.
Plaintiffs filed a motion to clarify the case schedule or, in the alternative, to compel discovery of “data and documents relating to the [g]overnment’s calculation of payments under the [DPP] for all putative class members, not just the named [p]laintiffs” on 31 July 2020, the last day of discovery. See Pls.’ Mot. to Compel (“Pl.’s MTC”) at 2, ECF No. 161 (emphasis added). In response, the government stated, “[t]here is no basis for the Court to . . . compel extraneous discovery of hospitals that are not now in this lawsuit.” Def.’s Resp. to Pl.’s MTC (“Def.’s MTC Resp.”) at 2, ECF No. 166. During a status conference on 13 October 2020, the parties agreed to table plaintiffs’ discovery request and associated Motion to Compel pending resolution of the government’s then-pending Motion for Reconsideration, ECF No. 150, and any additional potentially dispositive motions. See 13 Oct. 2020 Tr. (“SC Tr.”) at 27:13–28:9, ECF No. 178 (“THE COURT: . . . So to state differently, then, [plaintiffs agree] to stay consideration of this particular [discovery] issue until class certification is decided? [PLAINTIFFS:] Yes, that would be fine. THE COURT: . . . [W]ould the [g]overnment agree with that? [THE GOVERNMENT:] Yes, [y]our [h]onor . . . [but] the [g]overnment still intends to file a motion for summary judgment. . . . THE COURT: Okay. So on the [g]overnment’s motion for summary judgment . . . that should probably not be filed until at least after the motion for reconsideration is resolved? [THE GOVERNMENT:] That’s correct.”). On 5 June 2020, plaintiffs filed a renewed motion to certify a class and appoint class counsel (“Pls.’ Class Cert.”), ECF No. 146, which the parties fully briefed. See Def.’s Class Cert. Resp., ECF No. 207; Pls.’ Class Cert. Reply, ECF No. 226. On 26 August 2021, the government filed a motion for summary judgment (“Def.’s MSJ”), ECF No. 203. Plaintiffs filed an opposition to the government’s motion for summary judgment on 4 February 2022 (“Pls.’ MSJ Resp.”), ECF No. 225, and on 11 March 2022, the government filed a reply (“Def.’s MSJ Reply”), ECF No. 234. “The Court [granted] the government’s [M]otion for [S]ummary [J]udgment as to plaintiffs’ hospital-data duty and mutual mistake of fact claims but [denied] the government’s [M]otion as to plaintiffs’ TMA-data duty and alternate zip code claims[,] . . . [and stayed] the evidentiary motions” on 28 November 2022. Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 428 (2022). The Court, deeming the government’s settlement arrangements with plaintiffs to be contracts (the “DPP Contracts”), specifically found “the DPP Contract[s] only obligated TMA to use its data, not the hospitals’ data,” leaving the government’s data as the only set relevant to this case. Id. at 427. The Court held the government’s (1) “failure to extract thirteen line items [meeting all qualifications for extraction] for Integris Baptist and Integris Bass Baptist”; and (2) failure to adjust “five . . . line items [for Integris Baptist] during the DPP because of an alternate zip code” constituted breach of the DPP Contracts. Id. at 409–10, 412. “Based on the summary judgment holding . . . the Court [found it needed] further information regarding how plaintiffs in this post-summary judgment smaller class would meet the requirements for class certification.” Id. “The Court accordingly decline[d] to rule on plaintiffs’ class certification motion . . .
[a]s the only surviving claims are breach of contract for failure to follow the DPP in a few limited circumstances, [and] the parties did not adequately brief the narrower proposed class of plaintiffs arising under the remaining claims.” Id. The Court ordered the parties to file “a joint status report [(JSR)] providing the parties’ views on class certification for the smaller class of plaintiffs affected by the government’s breach of contract for failure to follow the DPP in limited circumstances and on whether further briefing is necessary.” Id. On 28 December 2022, the parties filed a JSR providing their opposing positions on whether plaintiffs can request further discovery related to class certification: “plaintiffs expressly reserve, and do not waive, any rights that they may currently have, or may have in the future, with respect to additional class certification fact or expert discovery”; and “the [g]overnment opposes any further fact or expert discovery in connection with plaintiffs’ amended/supplemental motion for class certification, and, in agreeing to the foregoing briefing schedule, is not agreeing to any further fact or expert discovery in this case.” 28 Dec. 2022 JSR at 2, ECF No. 262. Plaintiffs then filed a motion requesting leave to conduct further discovery and submit a supplemental expert report on 21 March 2023 (“plaintiffs’ Discovery Motion”). Pls.’ Disc. Mot., ECF No. 269. The government filed a response on 21 April 2023. Gov’t’s Disc. Resp. Plaintiffs filed a reply on 9 May 2023. Pls.’ Disc. Reply, ECF No. 271. The Court held oral argument on 19 July 2023. See 5 June 2023 Order, ECF No. 272; 19 July 2023 Oral Arg. Tr. (“Tr.”), ECF No. 276. On 31 August 2023, following oral argument on plaintiffs’ Discovery Motion, the government filed an unopposed motion to stay the case for the government to complete a “second look at the records [analyzed] . . . in the July 2019 expert report of Kennell . . . that were the subject of one of the Court’s liability rulings on summary judgment.” Def.’s Mot. to Stay at 1, ECF No. 277. The Court granted this Motion on the same day. Order, ECF No. 278. On 25 October 2023, the parties filed a JSR, ECF No. 284, in which the government addressed its findings4 and “proposed [a] way forward” in this case. 25 Oct. 2023 JSR at 2. [Footnote 4: In the 25 October 2023 JSR, the government explained twelve of the thirteen line items the government failed to extract for Integris Baptist and Integris Bass Baptist, see Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 412 (2022), were missed due to a “now-known” error in which “a very small set of patients comprised of military spouses . . . under age 65” were overlooked because they “receive Medicare Part A” but not Medicare Part B, meaning the “outpatient services that these individuals receive are paid for . . . by TRICARE.” 25 Oct. JSR at 4, ECF No. 284. As a result of this Medicare arrangement, line items for this group of patients were not extracted as the individuals were mistakenly deemed Medicare, rather than TRICARE, recipients for procedures within the scope of the DPP. Id. The cause of the thirteenth unextracted line item remains unclear. Id. at 5.] In response to the government’s data analysis, plaintiffs noted in the JSR “the [g]overnment’s update makes clear that [additional g]overnment data is required to evaluate which hospitals were affected by the [g]overnment’s breach of the DPP.” Id. at 16. Plaintiffs then likewise “[p]roposed [n]ext [s]teps” in this case, beginning with resolution of their Discovery Motion. Id. at 18. On 19 December 2023, the Court held a telephonic status conference to understand the technical aspects of plaintiffs’ discovery requests as they relate to the DPP process and algorithm. See Scheduling Order, ECF No. 285. B.
Discovery Requests at Issue Plaintiffs seek leave to perform additional discovery stemming from the Court’s summary judgment holding “TMA [breached its duty] . . . to extract, analyze, and adjust radiology data from its database” by failing to (1) adjust “five . . . line items [for Integris Baptist] during the DPP because of an alternate zip code” and (2) “extract . . . thirteen line items [meeting the criteria for extraction] for Integris Baptist and Integris Bass Baptist.” Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 409–10, 412. Plaintiffs’ sought-after discovery includes a request for “the same data for the [putative] class hospitals” as plaintiffs currently “have [for the] six named [p]laintiffs,” Tr. at 50:14–19, to assist plaintiffs in identifying “line items in [TMA’s radiology data] . . . that met the [DPP C]ontract criteria but were excluded from the adjustment . . . .” Gov’t’s Disc. Resp. at 15. In all, plaintiffs “seek leave to (1) depose a [g]overnment corporate designee, (2) serve document requests, and (3) thereafter serve a supplemental expert report on the relevant class issues.” Pls.’ Disc. Mot. at 2. Plaintiffs further detail the purpose of each request: First, [p]laintiffs seek leave to depose a [g]overnment corporate designee to identify the various data sources in the [g]overnment’s possession from the relevant time period. Second, [p]laintiffs seek leave to serve . . . document requests to obtain critical data related to each Potential Class member hospital. Third, once the above discovery is completed, [p]laintiffs seek leave to serve a supplemental expert report that applies the DPP methodology to the relevant claims data to identify the Final Class. Id. at 6–7 (footnote omitted) (citations omitted). The second request, mirroring plaintiffs’ February 2020 request for “[a]ny and all data concerning hospital outpatient service claims and TRICARE reimbursement for hospital outpatient services claims during the relevant time period,” App. to Pl.’s Disc. Mot. at 23, comprises “(1) all information Kennell used in the DPP calculations for each Potential Class member hospital, as well as the underlying calculations[;] and (2) all hospital outpatient claims data available for each of the Potential Class member hospitals during the relevant time period.” Pl.’s Disc. Mot. at 8–9. This request includes: (1) CMAC rate files “needed to apply the DPP methodology”; (2) “[d]ata on hospital outpatient radiology services claim line items for each Potential Class member hospital”; (3) “[d]ata concerning hospital outpatient services claim line items for each Potential Class member hospital” to verify the radiology files are complete; and (4) “TRICARE Encounter Data (‘TED’) records and Health Care Service Records (‘HCSR’).” Id. at 9–10. II. Applicable Law This court’s application of the Rules of the United States Court of Federal Claims (“RCFC”) is guided by case law interpreting the Federal Rules of Civil Procedure (FRCP).
See RCFC rules committee’s note to 2002 revision (“[I]nterpretation of the court’s rules will be guided by case law and the Advisory Committee Notes that accompany the Federal Rules of Civil Procedure.”). Regarding the scope of discovery, the rules of this court provide: [P]arties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit. RCFC 26(b)(1). “Questions of the scope and conduct of discovery are . . . committed to the discretion of the trial court.” Florsheim Shoe Co., Div. of Interco, Inc. v. United States, 744 F.2d 787, 797 (Fed. Cir. 1984). The Court of Federal Claims generally “afford[s] a liberal treatment to the rules of discovery.” Securiforce Int’l Am., LLC v. United States, 127 Fed. Cl. 386, 400 (2016), aff’d in part and vacated in part on other grounds, 879 F.3d 1354 (Fed. Cir. 2018), cert. denied, 139 S. Ct. 478 (2018) (mem.). “[T]he [C]ourt must be careful not to deprive a party of discovery that is reasonably necessary to afford a fair opportunity to develop and prepare the case.” Heat & Control, Inc. v. Hester Indus., Inc., 785 F.2d 1017, 1024 (Fed. Cir. 1986) (quoting FED. R. CIV. P. 26(b)(1) advisory committee’s note to 1983 amendment). Further, “[a] trial court ‘has wide discretion in setting the limits of discovery.’” Schism v. United States, 316 F.3d 1259, 1300 (Fed. Cir. 2002) (quoting Moore v. Armour Pharm. Co., 927 F.2d 1194, 1197 (11th Cir. 1991)). This court has previously found such “discretion [extends to] . . . deciding whether to grant a motion to . . . reopen discovery.” Croman Corp. v. United States, 94 Fed. Cl. 157, 160 (2010) (citing Te-Moak Bands of W. Shoshone Indians of Nev. v. United States, 948 F.2d 1258, 1260 (Fed. Cir. 1991)); see also Wordtech Sys., Inc. v. Integrated Networks Sols., Inc., 609 F.3d 1308, 1322–23 (Fed. Cir. 2010) (citing Coleman v. Quaker Oats Co., 232 F.3d 1271, 1294 (9th Cir. 2000)) (applying Ninth Circuit law in determining trial court did not abuse its discretion in refusing to reopen discovery). Notwithstanding, modification of a court-imposed schedule, including a discovery schedule, may be done “only for good cause and with the judge’s consent.” RCFC 16(b)(4). In Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 6 of 23 - 7 - High Point Design, the Federal Circuit applied Second Circuit law5 when discussing the good cause standard of FRCP 16(b)(4) for amending a case schedule. “When assessing whether good cause has been shown, ‘the primary consideration is whether the moving party can demonstrate diligence.’” High Point Design LLC v. Buyers Direct, Inc., 730 F.3d 1301, 1319 (Fed. Cir. 2013) (quoting Kassner v. 2nd Ave. Delicatessen, Inc., 496 F.3d 229, 244 (2d Cir. 2007)); see also Adv. Software Design Corp. v. Fiserv, Inc., 641 F.3d 1368, 1381 (Fed. Cir. 2011) (“Under the good cause standard, the threshold inquiry is whether the movant has been diligent.” (citing Sherman v. Winco Fireworks, Inc., 532 F.3d 709, 717 (8th Cir. 2008))). This “requires a showing that even with the exercise of due diligence the moving party could not meet the order’s timetable.” Slip Track Sys., Inc. v. 
Metal-Lite, Inc., 304 F.3d 1256, 1270 (Fed. Cir. 2002) (applying Ninth Circuit law in finding defendant’s attempt to amend the pleadings first required modification of the scheduling order under FRCP 16(b)(4)). Trial courts may also consider “other relevant factors including, in particular, whether allowing the amendment . . . will prejudice [the opposing party].” High Point Design, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). Prejudice related to the reopening of discovery may involve the delay of proceedings. Wordtech Sys., 609 F.3d at 1322 (“[A] need to reopen discovery and therefore delay the proceedings supports a district court’s finding of prejudice from a delayed motion to amend the complaint.” (quoting Lockheed Martin Corp. v. Network Sols., Inc., 194 F.3d 980, 986 (9th Cir. 1999))). III. Parties’ Arguments Plaintiffs contend “good cause exists for [p]laintiffs to conduct . . . additional limited discovery,” Pls.’ Disc. Mot. at 7–8 (citing Geneva Pharms. Tech. Corp. v. Barr Lab’ys, Inc., Nos. 98 Civ. 861, 99 Civ. 3687, 2005 WL 2132438, at *5 (S.D.N.Y. Sept. 6, 2005)), largely mirroring their “live and pending request for discovery [from February 2020] that[ ha]s been tabled” since October 2020, Tr. at 37:16–17; see SC Tr. at 27:13–23. Specifically, plaintiffs argue, “[t]he Court’s ruling on the [g]overnment’s Summary Judgment Motion is a material event that has fundamentally altered the scope of this case.” Pl.’s Disc. Mot. at 7–8 (citing Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 428 (2022) (“[T]he parties did not adequately brief the narrower proposed class of plaintiffs arising under the remaining claims.”)). Plaintiffs state the requested “discovery will allow them to provide the Court with the information required for the determination of the Final Class, and that this will greatly assist the Court with its ruling on class certification.” Id. at 10. At oral argument, the government acknowledged “[p]laintiffs[’] [2020] request [for] all class hospital data” concerned much of the same information plaintiffs are “asking for now.” Tr. at 103:7–17. The government, however, maintains “plaintiffs’ motion to reopen fact and expert discovery should be denied.” Gov’t’s Disc. Resp. at 17. Specifically, the government argues plaintiffs “filed this case, moved for class certification twice, and proceeded through two full rounds of fact and expert discovery, based upon . . . [p]laintiffs’ view of the law.” Id. at 16. [Footnote 5: RCFC 16(b)(4) is identical to the corresponding Rule 16(b)(4) of the Federal Rules of Civil Procedure.] The government therefore argues plaintiffs should not be permitted to “reopen fact and expert discovery” simply because “on summary judgment,” the “legal theories that animated plaintiffs’ [previous] discovery and expert reports have been shown . . . to be . . . wrong.” Id. at 16–17. The government argues “[a] party’s realization that it elected to pursue the wrong litigation strategy is not good cause for amending a schedule,” so plaintiffs have failed to show good cause to reopen discovery as they request. Gov’t’s Disc. Resp. at 17 (quoting Sys. Fuels, Inc. v. United States, 111 Fed. Cl. 381, 383 (2013)).
Alluding to the standard for reopening discovery, the government argues “no actions by plaintiffs . . . even remotely approximate the showing of diligence required under RCFC 16 . . . .” Id. at 35. The government also argues plaintiffs’ requests “overwhelming[ly] and incurabl[y] prejudice . . . the [g]overnment.” Id. at 38. IV. Whether Good Cause Exists to Reopen Discovery As noted supra Section III, plaintiffs contend “good cause exists for [p]laintiffs to conduct . . . additional limited discovery,” Pls.’ Disc. Mot. at 7–8 (citing Geneva Pharms. Tech. Corp. v. Barr Lab’ys, Inc., Nos. 98 Civ. 861, 99 Civ. 3687, 2005 WL 2132438, at *5 (S.D.N.Y. Sept. 6, 2005)), largely mirroring their “live and pending request for discovery [from February 2020] that[ ha]s been tabled” since October 2020, when the parties agreed to first proceed with the government’s Motion for Reconsideration and Motion for Summary Judgment. Tr. at 37:16–17; see SC Tr. at 27:13–23. Plaintiffs believe the requested “discovery will allow them to provide the Court with the information required for the determination of the Final Class, and that this will greatly assist the Court with its ruling on class certification.” Pl.’s Disc. Mot. at 10. In contrast, the government asserts plaintiffs have, in two previous rounds of discovery and in their summary judgment briefing, chosen to pursue a litigation strategy based on a class damages model relying on hospital and government data and cannot now justify reopening discovery because they need to change tactics following the Court’s summary judgment ruling limiting the scope of this case to the government’s data. See Gov’t’s Disc. Resp. at 22–23. Specifically, the government contends plaintiffs have neither made the required showing of diligence during past discovery periods to justify modifying the Court’s discovery schedule nor adequately refuted the government’s claim this discovery is prejudicial. See Gov’t’s Disc. Resp. at 28, 35. “A trial court ‘has wide discretion in setting the limits of discovery.’” Schism v. United States, 316 F.3d 1259, 1300 (Fed. Cir. 2002) (quoting Moore v. Armour Pharm. Co., 927 F.2d 1194, 1197 (11th Cir. 1991)). This court has previously found such “discretion [is applicable] in deciding whether to grant a motion to . . . reopen discovery.” Croman Corp. v. United States, 94 Fed. Cl. 157, 160 (2010) (citing Te-Moak Bands of W. Shoshone Indians of Nev. v. United States, 948 F.2d 1258, 1260 (Fed. Cir. 1991)). RCFC 16(b)(4) permits modification of a court-imposed schedule, such as to re-open discovery, “only for good cause and with the judge’s consent.”6 [Footnote 6: At oral argument, the parties agreed plaintiffs are requesting the Court reopen discovery, meaning this good cause standard applies. Tr. 99:14–19: “[PLAINTIFFS:] I think, as between [supplementation and reopening], th[ese requests] probably fit[] better in the reopening category as between those two . . . . THE COURT: So . . . the standard for reopening is good cause? [PLAINTIFFS:] Yes. THE COURT: [The government], [do] you agree? [THE GOVERNMENT:] I agree.”] Good cause “requires a showing that even with the exercise of due diligence the moving party could not meet the order’s timetable.” Slip Track Sys., Inc. v. Metal-Lite, Inc., 304 F.3d 1256, 1270 (Fed. Cir. 2002) (applying Ninth Circuit law in finding defendant’s attempt to amend the pleadings first required modification of the scheduling order under FRCP 16(b)(4)).
Likewise, in determining whether good cause exists to reopen discovery, a trial court may consider “other relevant factors including, in particular, whether allowing the amendment . . . will prejudice [the opposing party].” High Point Design LLC v. Buyers Direct, Inc., 730 F.3d 1301, 1319 (Fed. Cir. 2013) (quoting Kassner v. 2nd Ave. Delicatessen, Inc., 496 F.3d 229, 244 (2d Cir. 2007)). The Court accordingly must determine whether good cause exists to reopen discovery as requested by plaintiffs by analyzing plaintiffs’ diligence and whether the requested discovery prejudices the government. The Court begins with plaintiffs’ document requests. A. Document Requests Plaintiffs request the government turn over “critical data related to each Potential Class member hospital” and argue “denying ‘precertification discovery where it is necessary to determine the existence of a class is an abuse of discretion.’” Pls.’ Disc. Mot. at 6–7; Pls.’ Disc. Reply at 2 (quoting Perez v. Safelite Grp. Inc., 553 F. App’x 667, 669 (9th Cir. 2014)). These document requests specifically target “(1) all information Kennell used in the DPP calculations for each Potential Class member hospital, as well as the underlying calculations, and (2) all hospital outpatient claims data available for each of the Potential Class member hospitals during the relevant time period.” Pls.’ Disc. Mot. at 8–9. Plaintiffs’ goal is to acquire all “radiology line item[]” data and other information necessary to “apply the DPP methodology” to all of the putative class members’ claims data from the DPP period. Id. at 9. Plaintiffs contend “good cause exists” for the Court to reopen discovery with respect to these documents because the “Court’s ruling on the [g]overnment’s Summary Judgment Motion is a material event that . . . fundamentally altered the scope of this case.” Id. at 8. Namely, plaintiffs’ “damages are now limited to those claims involving errors in the [g]overnment’s data,” so plaintiffs allege this data, which “by its very nature [is] exclusively in [the government’s] possession,” is necessary “to identify the class members.” Pls.’ Disc. Reply at 3; see Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 427–29 (2022). Further, plaintiffs believe reopening discovery for this request is appropriate because in February 2020, during discovery, plaintiffs served a request for production on the government for the same “data concerning hospital outpatient services, claims, and . . . reimbursement for all class hospitals.” Tr. at 41:5–11. Plaintiffs likewise moved to compel this discovery in July 2020. Pls.’ Disc. Reply at 4 (“Plaintiffs also later moved for an order to conduct class discovery or for the [g]overnment to alternatively produce documents for all hospitals.”). Plaintiffs argue tabling this request and motion at the end of October 2020 while the case “proceeded with reconsideration, summary judgment, and other procedural” items did not do away with their “live and pending request for [this] discovery.” Tr. at 41:16–17, 37:14– 25, 128:5–6. With respect to prejudice, plaintiffs clarify their requests “will not prejudice the” government primarily because “the benefit to this case from the discovery would significantly outweigh any burden,” Pls.’ Disc. Reply at 8–9 (first citing Davita HealthCare Partners, Inc. v. United States, 125 Fed. Cl. 394, 402 n.6 (2016); and then citing Kennedy Heights Apartments Ltd. I v. United States, 2005 WL 6112633, at *4 (Fed. Cl. Apr. 
26, 2005)), as this discovery will “assist the court with its ruling on class certification.” Pls.’ Disc. Mot. at 9–10. Further, plaintiffs contend any prejudice could be cured at trial by cross-examination of plaintiffs’ expert, who will use this data in a future supplemental report. Pls.’ Disc. Reply at 10 (citing Panasonic Commc’ns Corp. of Am. v. United States, 108 Fed. Cl. 412, 416 (2013)). Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 9 of 23 - 10 - The government argues good cause does not exist to reopen discovery as requested by plaintiffs. With respect to diligence, the government first asserts plaintiffs’ 31 July 2020 Motion regarding class discovery was not diligent because it was filed on the last day of the discovery period. Gov’t’s Disc. Resp. at 33–34. Next, the government argues “[p]laintiffs make no claim that it was impossible during two separate discovery periods for them to have served on the [g]overnment . . . the proposed discovery requests . . . because it obviously was not.” Id. at 28 (citation omitted). Rather, per the government, “plaintiffs disregarded, rather than responding to, evidence, analysis, and law that was inconsistent with their theories of the case, even when the [g]overnment brought such issues to the fore.” Id. at 30. The government argues “plaintiffs ignored these issues at their peril throughout the entire second period of fact and expert discovery that followed, and that means that they were not diligent under the law” and are not now entitled to discovery to assist them in changing their theory of the case. Id. at 31–32. Concerning prejudice, the government argues “plaintiffs are proposing fact discovery on a scale never before undertaken in this case, a new expert report for the [g]overnment to then respond to, more expert depositions, and, no doubt, additional Daubert and class-related motions practice, resulting in substantial delay in bringing this matter to resolution.” Id. at 36 (footnote omitted). “Permitting plaintiffs to now evade a long overdue reckoning, and attempt to moot [the government’s motions to exclude plaintiffs’ expert reports], in addition to being completely contrary to law, [according to the government,] deprives the [g]overnment of its day in court for what should be an imminent resolution of this matter.” Id. 1. Diligence A finding of diligence sufficient to modify a case schedule “requires a showing that even with the exercise of due diligence the moving party could not meet the order’s timetable.” Slip Track Sys., Inc., 304 F.3d at 1270 (applying Ninth Circuit law in finding defendant’s attempt to amend the pleadings first required modification of the scheduling order under FRCP 16(b)(4)). On 11 February 2020, at the very early stages of the “re-opened period of fact discovery,” plaintiffs “served the [g]overnment with additional document requests,” including a request for “[a]ny and all data concerning hospital outpatient services claims and TRICARE reimbursement for hospital outpatient services claims during the relevant time period . . . .” Gov’t’s Disc. Resp. at 11 (citations omitted); see also App. to Pls.’ Disc. Mot. at 23. At the time, the government “objected to this request” and only “produce[d] the data requested for the six [named] plaintiffs.” Id. at 11–12 (citations omitted). Over the next several months, the parties continued with fact and expert discovery, during which time the Court “established a schedule for briefing on class certification and summary judgment.” Id. 
at 13 (citing Order at 2, ECF No. 143). On 31 July 2020, “the date . . . both fact and expert discovery closed, plaintiffs filed a motion . . . [to] compel[] the [g]overnment to produce documents for all hospitals, rather than for just the six representative plaintiffs.” Id. at 14. Plaintiffs therefore requested the data at issue in this document request at least twice before the instant Motion—once on 11 February 2020 and again on 31 July 2020. Tr. at 81:10–11 (“[PLAINTIFFS:] [W]e did ask for all of those things that [the government is] talking about [before we t]abled the issues . . . .”). They thus argue they “meet [the] diligence [standard] here because [they] asked for” this information “a long time ago” and continued to believe it “was a live and open issue.” Tr. at 128:5–6; Tr. at 104:25–105:9 (“[PLAINTIFFS:] We served the discovery request in the discovery period. We got objections from the [g]overnment. We conferred several times with [the government] about this to figure Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 10 of 23 - 11 - out what we are doing here . . . . We then had the conference with the Court because we had filed our motion [to compel] and the [g]overnment fought to arrange things in the way they arranged. So we walked away from that believing this [discovery] was a live and open issue.”); see 28 Dec. 2022 JSR at 2. The government’s first diligence argument, as noted supra Section IV.A, is plaintiffs filed their Motion to Compel on the final day of discovery and thus did not diligently pursue this request. Gov’t’s Disc. Resp. at 33–34; Tr. at 84:21–23 (“[THE GOVERNMENT:] [Plaintiffs] did nothing between March and July. There was no agreement to table during that four-month period. And then in July, they filed a motion [to compel.]”). Despite the already pending 11 February 2020 “request for all class hospital data,” the government contends plaintiffs “should have filed more motions to compel” earlier in the discovery period. Tr. at 85:4–10; Tr. at 107:2– 5 (“[PLAINTIFFS:] [The government is saying] we raised [these discovery issues] too long ago and didn’t come back often enough.”). To the extent the government alleges “filing a motion to compel on the very last day of discovery is . . . untimely, not diligent,” however, the government overlooks the significance of plaintiffs’ timely February 2020 request. See Gov’t’s Disc. Resp. at 34. Plaintiffs did not first make this request the day discovery closed; they asked the government to produce these documents early in the discovery period. Pls.’ Disc. Mot. at 4–5. Plaintiffs then “conferred several times with” the government and waited to see whether the government’s production would be sufficiently responsive to their February 2020 request despite the government’s objection. Tr. at 104:24–105:6 (“[PLAINTIFFS]: We served the discovery request in the discovery period. We got objections from the [g]overnment. We conferred several times with [the government] about this . . . We then . . . filed our motion. . . .”). Thus, only when it became clear the government was not going to produce plaintiffs’ requested information or any comparable data in the final days of the discovery period did plaintiffs file a motion to compel. Id. Further, the government’s cited cases for the proposition motions filed at the end of discovery are untimely are from out-of-circuit district courts and contain factual situations inapposite to this case. See Gov’t’s Disc. Resp. at 34 (first citing Rainbow Energy Mktg. Corp. v. 
DC Transco, LLC, No. 21-CV-313, 2022 WL 17365260, at *2 (W.D. Tex. Dec. 1, 2022) (denying a renewed motion to compel after: (1) the plaintiff’s initial motion was denied, (2) the plaintiff filed a motion to extend discovery after the period had closed, and (3) the plaintiff filed a renewed motion to compel on the last day of extended discovery); then citing U.S. ex rel. Gohil v. Sanofi U.S. Servs., Inc., No. 02-2964, 2020 WL 1888966, at *4 (E.D. Pa. Apr. 16, 2020) (rejecting a motion to compel in part because the requesting party made a “misrepresentation that it did not know” the importance of the information until just before the close of discovery); then citing Summy-Long v. Pa. State Univ., No. 06–cv–1117, 2015 WL 5924505, at *2, *5 (M.D. Pa. Oct. 9. 2015) (denying the plaintiff’s motion to compel “because [her] request [wa]s overly broad and unduly burdensome and because granting further discovery extensions . . . would strain the bounds of reasonableness and fairness to all litigants”); then citing In re Sulfuric Acid Antitrust Litig., 231 F.R.D. 331, 332–33, 337 (N.D. Ill. 2005) (acknowledging there is “great[] uncertainty” as to whether courts should deny motions to compel filed “very close to the discovery cut-off date” and recognizing “the matter is [generally] left to the broad discretion” of the trial court “to control discovery”); then citing Toone v. Fed. Express Corp., No. Civ. A. 96-2450, 1997 WL 446257, at *8 (D.D.C. July 30, 1997) (denying the plaintiff’s motion to compel filed on the last day of discovery because (1) given the close proximity to the original date for trial, “the defendant could have responded to the request . . . on Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 11 of 23 - 12 - the day of the original trial date,” and (2) it was moot); and then citing Babcock v. CAE-Link Corp., 878 F. Supp. 377, 387 (N.D.N.Y. 1995) (denying a motion to compel regarding discovery requests served on the last day of discovery). The Court therefore is not persuaded plaintiffs’ Motion to Compel was untimely. Schism, 316 F.3d at 1300 (quoting Moore, 927 F.2d at 1197). The government further contends “[p]laintiffs make no claim that it was impossible during two separate discovery periods for them to have served on the [g]overnment” this “proposed discovery request[].” Def.’s Disc. Resp. at 28. The government argues plaintiffs’ “tunnel vision” with respect to their legal theory caused plaintiffs to ignore “evidence, analysis, and law” not directly consistent with “their theories of the case, even when the [g]overnment brought such issues to the fore.” Id. at 29–30. “Turning a blind eye . . . [due to] legal error is not the same thing as having the inability to meet court deadlines,” according to the government, so “plaintiffs cannot demonstrate the requisite diligence.” Id. at 30. Although plaintiffs did not file “more motions to compel,” plaintiffs timely made their February 2020 request and timely filed their July 2020 Motion to Compel, supra. See Tr. at 85:4–9. To the extent the government alleges plaintiffs are not entitled to reopen discovery to amend their litigation strategy because the government “unmasked on summary judgment” plaintiffs’ “legal errors,” the government overlooks its own admission at oral argument, “[p]laintiffs’ request[s] [for] all class hospital data” in February and July 2020 sought the same data “[plaintiffs a]re asking for now.” Tr. 103:7–17; Def.’s Disc. Resp. at 28. 
Contrary to the government’s argument, plaintiffs therefore did not have “tunnel vision” causing them to ignore the requested evidence earlier in this litigation. See Def.’s Disc. Resp. at 28–30. Rather, plaintiffs requested this data during the appropriate discovery periods, only to have their request put on hold “because the [g]overnment ha[d] additional motions” it wished the Court to first decide. See Tr. at 85:12–21 (the court); Pls.’ Disc. Mot. at 4; App. to Pl.’s Disc. Mot. at 23; Tr. at 105:5–9; Def.’s Disc. Resp. at 14 (“Ultimately, the issues raised by this motion [to compel] were tabled by agreement of the parties.”); Tr. at 55:5–6 (“[PLAINTIFFS:] [T]he [g]overnment fought tooth and nail [to have the Court] hear [their] summary judgment motion first.”). Plaintiffs have thus considered these requests “a live and open issue” pending resolution of the government’s motions ever since, prompting them to file the instant Motion upon the Court issuing its Summary Judgment Order in November 2022. Tr. at 105:8–9 (“[PLAINTIFFS:] [W]e walked away from that [tabling discussion] believing this [discovery] was a live and open issue.”). Finally, while this “data [may] not [have been] necessary for summary judgment . . . [it is] for class certification.”7 Tr. at 111:10–11 (plaintiffs); Clippinger v. State Farm Mut. Auto. Ins. Co., 2021 WL 1894821, at *2 (“[C]lass certification discovery is not relevant [at the summary judgment stage].”); Tr. at 111:8–11 (“[PLAINTIFFS]: Well, I think like in Clippinger, there is some wisdom to the concept that maybe all of that data is not necessary for summary judgment, but then becomes necessary for class certification.”). [Footnote 7: To the extent the government relies on plaintiffs’ 13 April 2020 statement plaintiffs “will not need this information [pertaining to hospitals other than the six named plaintiffs] prior to resolving [p]laintiffs’ [M]otion for [C]lass [C]ertification,” the government overlooks the substantial change in circumstances discussed infra Section IV.B.1. See Def.’s Disc. Resp. at 12–13 (quoting 21 May 2020 JSR at 3–4, ECF No. 140). The government likewise ignores plaintiffs’ agreement to table these discovery requests temporarily in October 2020, at which time plaintiffs acknowledged they would eventually re-raise these requests, even if—at the time—the plan was to do so after class certification. See Tr. at 85:12–21, 55:5–6.] To that end, the government
[or] we’re not eventually going to be coming for that.” Tr. at 126:18–20. To the contrary, plaintiffs “requested this [data] during discovery,” and have long maintained this discovery “is the way to” “figure out . . . what are we dealing with” from a class perspective, including in the JSR filed after the Court’s November 2022 Summary Judgment Order, in which plaintiffs reserved the right to move for “additional class certification fact or expert discovery.” Tr. at 44:10, 56:21–22; 28 Dec. 2022 JSR at 2. By way of the government’s objection to plaintiffs’ February 2020 request and the parties’ tabling this request in October 2020, plaintiffs “even with the exercise of due diligence[,]” could not have obtained the requested information in a way sufficient to “meet the [Court’s discovery] timetable.” Slip Track Sys., Inc., 304 F.3d at 1270. Had they “received the data in 2020,” they “would have . . . run the DPP” for all potential class members as plaintiffs now request the opportunity to. Tr. at 113:12–15. Instead, plaintiffs did not have access to the data so continued to raise this request at all reasonably appropriate times. See Tr. at 112:8–9 (“[PLAINTIFFS:] [I]t was not possible for us to have done this [DPP] calculation without th[is] data.”). The Court accordingly finds plaintiffs were sufficiently diligent to justify a finding of good cause to reopen fact discovery as to plaintiffs’ document request for “critical data related to each Potential Class member hospital,” Pls.’ Disc. Mot. at 6. Slip Track Sys., Inc., 304 F.2d at 1270. 2. Prejudice In considering whether to reopen discovery, a trial court may consider, in addition to the requesting party’s diligence, “other relevant factors including, in particular, whether allowing the amendment . . . will prejudice [the opposing party].” High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244). Prejudice related to the reopening of discovery may involve the delay of proceedings. Wordtech Sys., Inc. v. Integrated Networks Sols., Inc., 609 F.3d 1308, 1322 (Fed. Cir. 2010) (“[A] need to reopen discovery and therefore delay the proceedings supports a district court’s finding of prejudice from a delayed motion to amend the complaint.” (quoting Lockheed Martin Corp. v. Network Sols., Inc., 194 F.3d 980, 986 (9th Cir. 1999))). Further, RCFC 26(b)(1) provides: [P]arties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 13 of 23 - 14 - burden or expense of the proposed discovery outweighs its likely benefit. RCFC 26(b)(1). “Questions of the scope and conduct of discovery are . . . committed to the discretion of the trial court.” Florsheim Shoe Co., Div. of Interco, Inc. v. United States, 744 F.2d 787, 797 (Fed. Cir. 1984). The government contends reopening discovery is prejudicial because “plaintiffs are proposing fact discovery on a scale never before undertaken in this case, a new expert report for the [g]overnment to then respond to, more expert depositions, and, no doubt, additional Daubert and class-related motions practice, resulting in substantial delay in bringing this matter to resolution.” Gov’t’s Disc. Resp. 
at 35–36 (footnote omitted). Plaintiffs, on the other hand, argue: (1) the sought after data “is . . . exclusively in [the government’s] possession”; and (2) their request will not prejudice the government because it will have an opportunity to oppose plaintiffs’ expert report. Pls.’ Disc. Reply at 3, 8. Even if there is any prejudice to the government, plaintiffs assert “the benefit to this case from the discovery would significantly outweigh any burden to the parties,” id. at 9, because of the assistance the discovery would provide the Court in ruling on class certification of the “narrower proposed class of plaintiffs,” Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 428, left after summary judgment. Id. at 5, 9–10 (first citing Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 428; and then citing Alta Wind I Owner Lessor C v. United States, 154 Fed. Cl. 204, 217 (2021)). Indeed, plaintiffs argue “produc[ing] the data . . . [now will be] more efficient [than production after certification] [a]s there will be less hypothetical back-and-forth between the parties [during certification briefing]” if the government’s data is available to all sides. Tr. at 118:2–6. Any prejudice could also be cured at trial by cross-examination of plaintiffs’ expert, plaintiffs contend. Pls.’ Disc. Reply at 10. The Court’s Summary Judgment Order indicated “the Court . . . needs further information regarding how plaintiffs in this post-summary judgment smaller class would meet the requirements for class certification” before deciding plaintiffs’ motion for class certification. Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 428. To that end, mirroring their requests during the 2020 discovery period, plaintiffs ask the government to provide “(1) all information Kennell used in the DPP calculations for each Potential Class member hospital, as well as the underlying calculations, and (2) all hospital outpatient claims data available for each of the Potential Class member hospitals during the relevant time period.” Pls.’ Disc. Mot. at 8–9 (emphasis added). Plaintiffs argue this discovery will “benefit . . . this case” by providing the radiology data needed to determine “who is in the [now-narrowed] class.” Pls.’ Disc. Reply at 2, 9 (“Plaintiffs’ damages are now limited to those claims involving errors in the [g]overnment’s data”); Tr. at 57:1. The government has not refuted this claim. Tr. at 132:16–25 (“THE COURT: Just to make sure I understand, can you just quickly articulate the prejudice to the [g]overnment [from the Motion to Compel the data]? . . . [THE GOVERNMENT:] The [prejudice from the] [M]otion to [C]ompel is a significant reasonableness and proportionality concern . . . .”); Tr. at 56:18–57:14 (“[PLAINTIFFS:] [W]e really followed the Court’s lead, looking at the summary judgment opinion saying . . . go back and figure out now what we are dealing with . . . [with respect to] who is in the class . . . only on the [g]overnment’s [data] . . . . [THE GOVERNMENT:] I firmly disagree with that [procedural move]. I think that [p]laintiffs are trying to jump their original expert report . . . [a]nd under the law, [they] can’t.”). Rather, the government’s primary prejudice-related allegation is plaintiffs’ request violates the Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 14 of 23 - 15 - “[r]easonableness and proportionality” tenants set forth in RCFC 26(b)(1) because “[t]he [g]overnment has already incurred substantial expense,” Def.’s Disc. Resp. 
at 36, and plaintiffs have “not established a right to discovery of [non-named plaintiff] hospitals . . . based on what they have shown.” Tr. at 130:10–14. As the Court noted above, the government cannot argue plaintiffs’ document discovery request was too early before summary judgment and too late now that the government has incurred greater expense in litigating this case. See supra Section IV.A.1. Neither party knew the substantial impact summary judgment would have on the trajectory of this case, but the parties agreed to table plaintiffs’ discovery requests until after the Court’s summary judgment decision. See Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 395; SC Tr. at 27:13–23. As evidenced by the recent data analysis performed by the government, after summary judgment, “[g]overnment data is required to evaluate which hospitals were affected by the [g]overnment’s breach of” contract. 25 Oct. 2023 JSR at 16. Plaintiffs cannot be expected to argue, and the Court cannot “rule on[,] numerosity [and related class certification factors] if there[ i]s no evidence regarding the approximate number of hospitals who would fit the . . . requirements allowed in the summary judgment order.” Tr. at 108:6–11. The parties must both have an opportunity to review the relevant data held by the government to determine which hospitals should, or should not, be included in the putative class.8 See id. The requested data, which includes the pertinent “outpatient claims data” and the information “used in the DPP calculations,” Pls.’ Disc. Mot. at 8–9, is therefore highly relevant to the next step in this case— class certification—and, rather than delay this case, having this data will enable the Court to decide plaintiffs’ motion for class certification more efficiently. Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399 (“[T]he additional time . . . does not warrant the severe sanction of exclusion of data helpful to both parties in this litigation.”). To the extent the government argues the scale of the information requested is “grossly disproportionate to the needs of the case,” Tr. at 110:22, the government ignores: (1) plaintiffs’ and the Court’s substantial need to understand “who would be in the class” come time to brief and rule on class certification, Tr. at 55:24–25; and (2) the inability of plaintiffs and the Court to access this data “exclusively in [the government’s] possession” without production by the government, Pls.’ Disc. Reply at 3. See Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399; RCFC 26(b)(1) (“[P]arties may obtain discovery regarding any nonprivileged matter that is relevant . . . and proportional . . . [considering] the parties’ relative access to [the] relevant information . . . .” (emphasis added)). The government likewise overlooks its ability to rebut any arguments plaintiffs make using this data both before and at trial. See Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217 (“Other Court of Federal Claims judges have noted that providing the government an opportunity to file a rebuttal mitigates any prejudice that may have otherwise existed in providing plaintiff the opportunity to reopen the record.”). The documents plaintiffs request are therefore highly relevant and proportional to the needs of the case as they will provide plaintiffs and the Court 8 This is not a case where, as the government alleges, plaintiffs are “attempt[ing] to use discovery to find new clients upon learning of infirmities in the claims of putative class representatives.” Def.’s Disc. Resp. 
at 26–27 (first citing In re Williams-Sonoma, Inc., 947 F.3d 533, 540 (9th Cir. 2020); then citing Gawry v. Countrywide Home Loans, Inc., 395 F. App’x 152, 160 (6th Cir. 2010); Douglas v. Talk Am., Inc., 266 F.R.D. 464, 467 (C.D. Cal. 2010); Falcon v. Phillips Elec. N. Am. Corp., 304 F. App’x 896, 898 (2d Cir. 2008)). Rather, plaintiffs are requesting access to information held by the government to adequately brief class certification on behalf of the existing named plaintiffs and the putative class. See Pls.’ Disc. Mot. at 2 (“After completion of this discovery, [p]laintiffs would then file an amended motion for class certification.”). Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 15 of 23 - 16 - information necessary for a thorough analysis of class certification. Florsheim Shoe Co., Div. of Interco, Inc., 744 F.2d at 797; Davita HealthCare Partners, 125 Fed. Cl. at 399 (“[T]he additional time . . . does not warrant the severe sanction of exclusion of data helpful to both parties in this litigation.”); RCFC 26(b)(1). The Court accordingly finds any prejudice to the government caused by the scope of plaintiffs’ document request is mitigated by the benefit of the requested information to the efficient resolution of this case.9 See High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); Tr. at 118:2–6. The government will have ample opportunity to oppose any supplemental expert reports presented by plaintiffs using the requested data, including through cross-examination of plaintiffs’ experts at trial. See Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217 (“Other Court of Federal Claims judges have noted that providing the government an opportunity to file a rebuttal mitigates any prejudice that may have otherwise existed in providing plaintiff the opportunity to reopen the record.”). The Court therefore finds plaintiffs were diligent in pursuing this document discovery request and the government will not experience prejudice sufficient to warrant denying plaintiffs’ Motion as to the request. The Court accordingly grants plaintiffs’ document discovery request as tailored, infra Section V, to the liability found in the Court’s November 2022 Summary Judgment Order, as there is good cause to do so. See High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A CHARLES A. WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”); Pls.’ Disc. Mot. at 8–9. B. Plaintiffs’ Request to Depose a Government Corporate Designee Pursuant to Rule 30(b)(6) Plaintiffs also seek leave to “depose a [g]overnment corporate designee to identify which data sources were . . . available to the [g]overnment from the relevant time period, and where the relevant claims data resides.” Pls.’ Disc. Mot. at 8. Plaintiffs specify they are seeking “an hour . . . of deposition, just getting the [g]overnment to . . . confirm . . . the data sources” they have now and had during the relevant time periods “to make sure . . . there’s been no spoliation . . . .” Tr. at 117:22–25. In response, the government contends it previously identified an agency employee “as an individual with ‘discoverable information concerning TRICARE Encounter Data (TED), the DHA Military Health System Data Repository (MDR), and the creation, content and maintenance of records in both of those databases[,]’. . . 
[but] plaintiffs expressly declined a deposition during the established periods of fact and expert discovery[] and elected instead to proceed through limited interrogatories.” Defs.’ Disc. Resp. at 33. The government alleges “[p]laintiffs cannot reasonably be said to have been diligent in pursuing the 9 The Court emphasizes the government alone is in possession of the TMA data potentially comprising “tens of millions of records.” Tr. at 134:3. As such, the government is the only party capable of sorting and producing the large volumes of information. See RCFC 26(b)(1) (“[P]arties may obtain discovery regarding any nonprivileged matter that is relevant . . . and proportional . . . [considering] the parties’ relative access to [the] relevant information.”). Indeed, at the 19 December 2023 status conference, the government agreed it is capable of reviewing all data in its possession to identify line items of putative class members missed during DPP extraction due to issues akin to those impacting twelve out of the thirteen unextracted line items for Integris Baptist and Integris Bass Baptist. See 25 Oct. 2023 JSR; see also Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 409, 412 (2022). Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 16 of 23 - 17 - deposition that they now request when they intentionally eschewed [an offered deposition] during the established period of fact discovery.” Id. The government also reasserts its prejudice and diligence-related arguments discussed supra Section IV.A.1–2. See, e.g., id. at 28 (“Plaintiffs make no claim that it was impossible during two separate discovery periods for them to have served on the [g]overnment . . . the deposition notice . . . because it obviously was not.”); Tr. at 130:8–10 (“THE COURT: . . . So what’s the prejudice though? [THE GOVERNMENT]: Reasonableness and proportionality.”). 1. Diligence The government’s only novel diligence argument related to plaintiffs’ deposition request is plaintiffs previously declined an opportunity to depose an “an individual with ‘discoverable information concerning [TED and MDR], and the creation, content and maintenance of records in both of those databases.” Defs.’ Disc. Mot. Resp. at 33. The government otherwise broadly asserts plaintiffs were not diligent in pursuing their discovery requests. See, e.g., id. at 28. As determined supra Section IV.A.1, plaintiffs were diligent with respect to pursuing the government’s data and related information at the appropriate time during discovery. See, e.g., App. to Pls.’ Disc. Mot. at 23–24. The Court therefore only addresses the government’s argument related to previous deposition opportunities below. A “trial court ‘has wide discretion in setting the limits of discovery.’” Schism, 316 F.3d at 1300 (quoting Moore, 927 F.2d at 1197). Notwithstanding, modification of a court-imposed schedule may be done “only for good cause and with the judge’s consent.” RCFC 16(b)(4). “When assessing whether good cause has been shown, ‘the primary consideration is whether the moving party can demonstrate diligence.’” High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244). The government’s primary contention—plaintiffs were not diligent in pursuing the requested deposition because they turned down an offer to depose a government employee in May 2019—assumes a party cannot be diligent if they have, at any time in the past, “eschewed [similar discovery.]” Defs.’ Disc. Mot. Resp. at 33. 
Over the past four and a half years, however, this case has changed substantially. See Alta Wind I Owner Lessor C, 154 Fed. Cl. at 227 (granting additional discovery upon remand and reassignment of the case); see also Geneva Pharms. Tech. Corp., 2005 WL 2132438, at *5 (“[M]aterial events have occurred since the last discovery period, which justice requires that the parties have an opportunity to develop through discovery.”). As noted by plaintiffs, “[t]he Court’s ruling on the [g]overnment’s Summary Judgment Motion . . . fundamentally altered the scope of this case,” Pls.’ Disc. Mot. at 8, by substantially narrowing the potential class members and limiting plaintiffs’ “damages . . . to [two] claims involving errors in the [g]overnment’s data,” Pls.’ Disc. Reply at 3. “[T]o analyze the extent of the . . . error[s]” in the government’s data, 25 Oct. 2023 JSR at 16, and perform “a more accurate damages calculation” for the putative class members, Pls.’ Reply at 7, plaintiffs therefore need to understand the data sources available to the government now and at the time of line item extraction. See 25 Oct. 2023 JSR at 17 (“The only way to evaluate whether Mr. Kennell failed to extract all relevant data . . . for the entire class is for the [g]overnment to produce . . . [the discovery] [p]laintiffs seek.”); Pls.’ Disc. Reply at 7. In 2020, in contrast, at which time plaintiffs “elected . . . to proceed through limited interrogatories” rather than conduct the government’s offered deposition, the Court had not yet narrowed the scope of the case or limited the damages calculations to the government’s data. Defs.’ Disc. Resp. at 33. During the Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 17 of 23 - 18 - initial discovery periods, plaintiffs still reasonably believed their own data might be relevant and did not yet understand the importance of the government’s data. See Pls.’ Disc. Reply at 3; see also 25 Oct. 2023 JSR at 16. Plaintiffs therefore did not exhibit a lack of diligence by not accepting the government’s offer to depose an individual whose testimony, at the time, was less relevant to the case. The government has accordingly failed to produce evidence sufficient to show plaintiffs were not diligent in pursuing the requested deposition. Schism, 316 F.3d at 1300; High Point Design LLC, 730 F.3d at 1319; see also Alta Wind I Owner Lessor C, 154 Fed. Cl. at 227. 2. Prejudice As noted supra Section IV.A.2, courts considering requests to reopen discovery may consider whether and to what extent granting the request will prejudice the opposing party, including via delaying the litigation. High Point Design, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); Wordtech Sys., 609 F.3d at 1322 (quoting Lockheed Martin Corp., 194 F.3d at 986). Regarding plaintiffs’ deposition request, the government argues granting plaintiffs’ deposition request will, like plaintiffs’ document requests, result in additional expense and “substantial delay in bringing this matter to resolution.” Def.’s Disc. Resp. at 36. Plaintiffs indicated at oral argument, however, the requested deposition will be “an hour,” with the goal being simply to understand “the data sources” in the government’s possession. Tr. at 117:22. 
To the extent this short deposition of a government employee, which the government was prepared to allow for several years ago, will allow the case to proceed “more efficient[ly]” to class certification with fewer “hypothetical back-and-forth[s] between the parties” related to considerations like numerosity, see Tr. at 118:2–6, the Court finds the minimal potential prejudice to the government from this deposition is outweighed by the value of this information to the later stages of this litigation. Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399 (“[T]he additional time . . . does not warrant the severe sanction of exclusion of data helpful to both parties in this litigation.”). The Court therefore does not find the government’s argument regarding diligence or prejudice persuasive with respect to plaintiffs’ deposition request. The Court accordingly grants this request as there is good cause to do so.10 High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). C. Supplemental Expert Report Plaintiffs finally request leave to “serve a supplemental expert report on . . . relevant class 10 To the extent the government intended its arguments related to proportionality and relevance to apply to plaintiffs’ deposition request, the Court is unpersuaded. See Def.’s Disc. Resp. at 36. A single deposition lasting approximately one hour on subject matter on which the government previously offered to permit a deposition is not disproportionate to the needs of this case. Schism v. United States, 316 F.3d 1259, 1300 (Fed. Cir. 2002) (quoting Moore v. Armour Pharm. Co., 927 F.2d 1194, 1197 (11th Cir. 1991)); RCFC 26(b)(1). Likewise, the subject matter—the sources of the data plaintiffs request access to—is highly relevant in ensuring a complete and accurate data set free of spoliation. Schism, 316 F.3d at 1300 (quoting Moore, 927 F.2d at 1197); RCFC 26(b)(1); see supra Section IV.A.2. Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 18 of 23 - 19 - issues” upon completion of the above-requested discovery. Pls.’ Disc. Mot. at 2. Specifically, plaintiffs wish to “submit a supplemental expert report analyzing the [government’s] data and applying the DPP methodology to the correct universe of outpatient radiology line items . . . .” Id. at 10; see also Pls.’ Disc. Reply at 3 (“Plaintiffs’ supplemental expert report would identify the scope of the class, as requested by the Court.”); Tr. at 73:10–14 (“[PLAINTIFFS:] [I]t is a very complex formula. And I think that it is something that . . . you would want someone with experience with these data line items going through and doing it . . . it’s [objective] math. . . . It’s essentially a claims administrator.”). Plaintiffs make clear their initial expert report was an attempt at extrapolating the named plaintiffs’ data “across the class to come up with . . . estimated number[s],” which they now wish to update with “the exact numbers” once they receive the government’s data. Tr. at 69:4–16. Plaintiffs contend “[r]eopening discovery is warranted where supplemental information from an expert would assist the Court in resolving important issues . . . [s]uch [as] . . . ‘presenting the Court with a more accurate representation of plaintiffs’ damages allegations.’” Pls.’ Disc. 
Reply at 6 (first citing Kennedy Heights Apartments Ltd. I, 2005 WL 6112633, at *3–4; and then quoting Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217). Likening this case to Alta Wind, plaintiffs argue the Court should conclude here an “expert report will provide the Court with a damages estimate more accurately reflecting plaintiffs’ damages position [in light of the changes to the case rendered by summary judgment] . . . and therefore will likely assist the Court.” Id. at 7 (quoting Alta Wind, 154 Fed. Cl. at 216); Tr. at 68:17–69:16 (“[PLAINTIFFS]: With respect to Ms. Jerzak and the breach of contract, she did two things [in her report.] . . . One, she compared the hospital line items to the government line items for the named [p]laintiffs and did a straight objective calculation of what was the difference. . . . She also took those numbers and extrapolated them across the class to come up with an estimated number. THE COURT: A hypothetical. [PLAINTIFFS]: Yes . . . [r]ecognizing that if the class was certified . . . we’d have to do the exact numbers.”). Plaintiffs conclude this report will “not prejudice the [g]overnment in any way, and would actually benefit the [g]overnment” by providing an “opportunity . . . to oppose” additional contentions appropriate to the posture of the case. Id. at 8 (emphasis omitted) (citing Alta Wind I Owner Lessor C, 154 Fed. Cl. at 216). Plaintiffs note, however, “in [their] mind, this [report] is something that always was going to happen after certification” at the merits stage, Tr. at 73:15– 16 (emphasis added), as they do not “need an expert report for class certification because” the government “admitted breach,” Tr. at 96:3–4; Tr. at 63:22–64:6 (“[PLAINTIFFS:] [L]et’s say the Court certified a class here. The next step . . . is for merits. Someone is going to have to spit out a report saying here are the class members and when I run their . . . data . . . here are the differences and here’s the number that gets spit out.” (emphasis added)). The government reiterates its diligence and prejudice arguments discussed supra Sections IV.A–B with respect to plaintiffs’ request for leave to file a supplemental expert report. The government likewise refutes the notion plaintiffs’ current expert report is a “placeholder . . . that was[] [not] really meant to be real.” Tr. at 70:19–20. In other words, the government contends plaintiffs “meant th[eir earlier] expert report” to apply to “their currently pending motion for class cert[ification],” Tr. at 71:21–23, and now “seek to have the Court rescue them from their own litigation choices,” including the choice to file “expert damages models [that] could never be used to measure class damages.” Def.’s Disc. Resp. at 16–17. Plaintiffs should not be permitted to file a new expert report, according to the government, simply because “they have not . . . marshaled any legally cognizable expert evidence concerning the few claims that remain” Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 19 of 23 - 20 - after summary judgment. Def.’s Disc. Resp. at 17. To the extent plaintiffs concede the requested expert report “is for [the] merits” stage and not necessary for class certification, however, the government believes “a class cannot be certified without a viable expert damages methodology meeting the requirements of Comcast,” meaning plaintiffs’ pending motion for class certification automatically fails because “the only expert evidence in the record that bears on the two types of breaches found by the Court is . 
. . offered by the [g]overnment.” Id. at 22– 23 (citing Comcast Corp. v. Behrend, 569 U.S. 27, 33–34 (2013)). Indeed, according to the government, “plaintiffs are left with no expert model at all as to the few remaining contract claims,” meaning they cannot adequately allege “damages are capable of measurement on a class[-]wide basis” as required by Comcast. Id. at 24 (quoting Comcast, 569 U.S. at 34). Concerning plaintiffs’ request for leave to file an expert report, the government broadly asserts plaintiffs were not diligent in pursuing their discovery requests. See, e.g., Def.’s Disc. Resp. at 28. As determined supra Section IV.A.1, however, plaintiffs were diligent with respect to pursuing the requested discovery generally. Plaintiffs requested the relevant data in February and July 2020 and planned to replace “the extrapolation” present in their earlier expert reports with analysis “using actual data” upon completion of this requested discovery. See supra Section IV.A.1; Tr. at 136:15–23. The Court’s November 2022 Summary Judgement Order narrowed the scope of this case and further highlighted the need for this additional discovery related to the remaining issues and potential class members. See supra Section IV.A.1, B; Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 427. Further, to the extent the government alleges plaintiffs requested expert report is prejudicial, the government will have sufficient time and opportunity to rebut any supplemental expert report filed by plaintiffs. See supra Section IV.A.2, B.2; Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217 (“Other Court of Federal Claims judges have noted that providing the government an opportunity to file a rebuttal mitigates any prejudice that may have otherwise existed in providing plaintiff the opportunity to reopen the record.”). The contemplated expert report, which will perform the DPP analysis for outpatient radiology claims data within the scope of the Court’s November 2022 liability findings for each putative class member hospital using “only the [government’s] data” as required by the Court’s Summary Judgment Order, could also aid the Court at the merits stage in determining “the amount[] that each hospital is owed.” Tr. at 78:7–15. The requested report therefore would likely not be prejudicial to the government to such an extent as to “warrant the severe sanction of exclusion of [useful] data.” Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399; High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A CHARLES A. WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). Plaintiffs acknowledge, however, the updated calculations they plan to include in their requested expert report are not necessary until “after [class] certification”—at the merits stage. Tr. at 73:15–16. At oral argument, plaintiffs clearly stated they do not “need an expert report for class certification,” which is the next step in this litigation. Tr. at 96:3–4. To the extent the government argues plaintiffs’ certification motion will necessarily fail because plaintiffs lack evidence “damages are [measurable] . . . on a class[-]wide basis” in response to this statement by plaintiffs, Def.’s Disc. Rep. at 23 (quoting Comcast, 569 U.S. 
at 34), plaintiffs respond the DPP Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 20 of 23 - 21 - is the requisite means of “calculat[ing] damages for every single class member,” Tr. at 136:4–5. While the Court reserves judgment as to plaintiffs’ class certification motion, plaintiffs’ argument the DPP provides their model for calculating damages on a class-wide basis because it is a uniform model applicable to all putative class members is sufficient to suggest plaintiffs need not fully calculate alleged damages in a supplemental expert report at this time. Tr. at 54:9–15 (“[PLAINTIFFS:] I think the type of cases that [the government] is talking about [like Comcast] where there’s been [a failure by the plaintiffs to actually address the calculation of class-wide damages, are inapposite because] we haven’t offered a model that is deviating from the contract. What we’re saying . . . the experts are going to . . . essentially crunch[ the] numbers [using the DPP].”). Plaintiffs can do so if and when the merits of this case are argued at trial. This is not a case like Comcast, in which the plaintiffs presented to the court “a methodology that identifies damages that are not the result of the wrong” at issue. Comcast, 569 U.S. at 37. Here, in contrast, the parties indicated at oral argument plaintiffs’ proffered DPP methodology from the parties’ DPP Contracts appears capable of calculating damages for all potential class members. Tr. at 136:1–5 (“[PLAINTIFFS:] But what I will tell you that we’re going to do with the data is we are going to have the auditor [i.e., the expert] plug [the government’s] data into the DPP. That is the model. That is [what] the contract . . . dictates . . . how you calculate damages for every single class member.”); Tr. at 54:12–13 (“[PLAINTIFFS:] [W]e haven’t offered a model that is deviating from the contract.”); Tr. at 93:2–5 (“THE COURT: But the model is just what you said is—if I understood correctly, is that the report is just DPP data discrepancy output. [THE GOVERNMENT]: For each individual [p]laintiff.”); see Tr. 93:2–95:25. The Court accordingly denies plaintiffs’ request for an expert report without prejudice in the interest of the efficient disposition of plaintiffs’ class certification motion. High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A CHARLES A. WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). To the extent plaintiffs “would want in [the] merits” stage an expert report from an “auditor to make sure” the parties “all agree on” damages calculated via the DPP, plaintiffs may refile this motion at that time. Tr. at 96:12–13 (plaintiffs). V. Scope of Granted Discovery and Next Steps As discussed supra Section IV: 1. The Court grants plaintiffs’ deposition request. 2. The Court grants plaintiffs’ document requests as follows: Plaintiffs are permitted to serve amended document discovery requests for all putative class member hospitals tailored to seek only those documents required for plaintiffs to identify “breach[es] of TMA’s [contractual] duty” under the DPP Contract akin to either: (1) the government’s failure to extract “thirteen line items for Integris Baptist and Integris Bass Baptist”; or (2) the government’s failure to adjust “five . . . line items” for Integris Baptist “during the DPP because of an alternate zip code.” See Ingham Reg’l Med. 
Ctr. v. United States, 163 Fed. Cl. 384, 409, 412 (2022). This specification ensures plaintiffs’ requests remain within the scope of the Court’s findings of liability in November 2022. Id. The Court notes at the 19 December 2023 status conference the government agreed it is possible to Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 21 of 23 - 22 - execute the same analysis performed on the named plaintiffs’ data in the 25 October 2023 JSR on the government’s data for all putative class members. 11 3. The Court denies plaintiffs’ request to file a supplemental expert report without prejudice. Plaintiffs may move to file an updated expert report later in this litigation as necessary, at which time the government will be permitted to file a response report. Within three weeks of the date this Order is issued, the parties shall file a JSR comprised of the following: 1. Plaintiffs’ discovery requests revised in accordance with the above clarifications; 2. The parties’ proposed schedule for discovery, including a timeline for plaintiffs’ deposition and the exchange of documents between the parties; and 3. The parties’ proposed schedule for re-briefing class certification after all discovery closes, including a proposed timeline for the filing of new expert reports. As noted by the Court at the 19 December status conference, plaintiffs’ next step should be to analyze the government’s data for the six named plaintiffs already in plaintiffs’ possession to assist plaintiffs in tailoring their document requests as discussed above. Further, at the 19 December 2023 status conference, the parties agreed the partial grant of plaintiffs’ Discovery Motion moots plaintiffs’ pending Motion to Certify Class Action and Appoint Class Counsel, ECF No. 146, as the parties will need to re-brief these issues following the narrowing of this case on summary judgment and the upcoming additional discovery. The government agreed its pending Motion to Exclude Inadmissible Evidence Relied Upon in Plaintiffs’ Motion for Class Certification, ECF No. 204, is accordingly moot. The government may refile a similar motion if needed during future class certification briefing. Plaintiffs likewise agreed to withdraw without prejudice their Motion to Exclude the Expert Opinions and Continued Participation of Kennell, ECF No. 251, pending further discovery and briefing. Further, plaintiffs agreed, given the scope of this case after summary judgment, the expert report of Fay is moot. Accordingly, the government’s Motion to Exclude the Expert Opinions of Fay, ECF No. 206, is moot. Finally, plaintiffs stated they plan to file a new expert report replacing that of Jerzak later in this litigation. The government noted at the 19 December status conference plaintiffs’ replacement of Ms. Jerzak’s current report will render the government’s Motion to Exclude the Expert Opinions of Jerzak, ECF No. 205, moot as well. 11 As discussed supra note 4, in the 25 October 2023 JSR, the government explained why twelve of the thirteen line items improperly excluded for Integris Baptist and Integris Bass Baptist were not extracted. At the 19 December 2023 status conference, the government indicated it can now search its database for line items improperly excluded due to this same error for all hospitals that participated in the DPP. The government noted, however, it is not aware of what caused the thirteenth line item to be missed so cannot create search criteria appropriate to identifying other similar misses. 
Finally, to identify missed alternate zip codes, the government stated it would need zip code information from plaintiffs and the putative class members.

VI. Conclusion

For the foregoing reasons, and as specified supra Section V, the Court GRANTS-IN-PART and DENIES-IN-PART plaintiffs’ Motion for Leave to Conduct Certain Limited Additional Discovery and to Submit Supplemental Expert Report, ECF No. 269, and FINDS as MOOT plaintiffs’ Motion for Clarification or, in the Alternative, to Compel Production, ECF No. 161.12 As noted supra Section V, the Court FINDS as MOOT plaintiffs’ Motion to Certify Class Action and Appoint Class Counsel, ECF No. 146, the government’s Motion to Exclude Inadmissible Evidence Relied Upon in Plaintiffs’ Motion for Class Certification, ECF No. 204, the government’s Motion to Exclude the Expert Opinions of Fay, ECF No. 206, and the government’s Motion to Exclude the Expert Opinions of Jerzak, ECF No. 205. As agreed to at the 19 December 2023 status conference, plaintiffs SHALL WITHDRAW their Motion to Exclude the Expert Opinions and Continued Participation of Kennell, ECF No. 251, without prejudice. Finally, as noted at oral argument, see Tr. at 139:10–140:8, the Court STRIKES the government’s Notice of Additional Authority, ECF No. 273, as deficient and GRANTS the government’s Unopposed Motion for Leave to File Notice of Supplemental Authority, ECF No. 274, for good cause shown. The parties SHALL FILE the joint status report discussed supra Section V on or before 23 January 2024.

IT IS SO ORDERED.

s/ Holte
HOLTE
Judge

12 At oral argument, the parties agreed the Court ruling on plaintiffs’ current Discovery Motion is also a “ruling on [plaintiffs’ previous Motion to Compel,] ECF [No.] 161.” Tr. at 139:2–9.
In the United States Court of Federal Claims

No. 13-821
(Filed: 2 January 2024)

INGHAM REG’L MEDICAL CENTER, n/k/a MCLAREN GREATER LANSING, et al., Plaintiffs, v. THE UNITED STATES, Defendant.

Plaintiffs are six hospitals purporting to represent a class of approximately 1,610 hospitals across the United States in a suit requesting, among other things, the Court interpret what the Federal Circuit has deemed an “extremely strange” contract.1 This contract arose when hospitals complained the government underpaid reimbursements for Department of Defense Military Health System, TRICARE, outpatient services rendered between 2003 and 2009. In 2011, after completion of a data analysis, the government voluntarily entered a discretionary payment process contract with plaintiffs and offered net adjusted payments. In November 2022, after nine years of litigation and one Federal Circuit appeal, the Court granted in part and denied in part the government’s Motion for Summary Judgment. As the only surviving breach of contract claims concern the government’s duty to extract, analyze, and adjust line items from its database, the Court required the parties to file a joint status report regarding the effect of summary judgment on plaintiffs’ Renewed Motion to Certify a Class Action. Following a status conference, plaintiffs filed a discovery motion related to class certification. For the following reasons, the Court grants-in-part and denies-in-part plaintiffs’ Motion.

1 9 June 2022 Oral Arg. Tr. at 161:7–13, ECF No. 259 (“THE COURT: So the Federal Circuit panel, when the case was argued, characterized this agreement as extremely strange. [THE GOVERNMENT]: That is accurate. It is extremely strange. THE COURT: It is extremely strange? [THE GOVERNMENT]: It is.”).

I. Background

A. Factual and Procedural History2

TRICARE is a “military health care system” which “provides medical and dental care for current and former members of the military and their dependents.” Ingham Reg’l Med. Ctr. v. United States, 874 F.3d 1341, 1342 (Fed. Cir. 2017). TRICARE Management Activity (TMA), a “field office in the Defense Department [(DoD)],” managed the TRICARE system.3 N. Mich. Hosps., Inc. v. Health Net Fed. Servs., LLC, 344 F. App’x 731, 734 (3d Cir. 2009). In 2001, Congress amended the TRICARE statute to require DoD to follow Medicare rules when reimbursing outside healthcare providers. Ingham Reg’l Med. Ctr., 874 F.3d at 1343 (citing 10 U.S.C. § 1079(j)(2) (2002)). To facilitate transition to Medicare rules, in 2005, DoD issued a Final Rule which specified “[f]or most outpatient services, hospitals would receive payments ‘based on the TRICARE-allowable cost method in effect for professional providers or the [Civilian Health and Medical Program of the Uniformed Services] (CHAMPUS) Maximum Allowable Charge (CMAC).’” Id. (quoting TRICARE; Sub-Acute Care Program; Uniform Skilled Nursing Facility Benefit; Home Health Care Benefit; Adopting Medicare Payment Methods for Skilled Nursing Facilities and Home Health Care Providers, 70 Fed. Reg. 61368, 61371 (Oct. 24, 2005) (codified as amended at 32 C.F.R. § 199)). The TRICARE-allowable cost method “applied until 2009, when TRICARE introduced a new payment system for hospital outpatient services that was similar to the Medicare [Outpatient Prospective Payment System (OPPS)].” Id.
In response to hospital complaints of payment issues, TRICARE hired Kennell and Associates, a consulting firm, to “undertake a study [(‘Kennell study’)] of the accuracy of its payments to the hospitals.” Ingham Reg’l Med. Ctr., 874 F.3d at 1343–44. The Kennell study “compared CMAC payments to the payments that would have been made using Medicare payment principles, and determined that DoD ‘(1) underpaid hospitals for outpatient radiology but, (2) correctly paid hospitals for all other outpatient services.’” Id. at 1344 (emphasis omitted) (citation omitted). From the Kennell study findings, “DoD created a discretionary payment process [(DPP)],” and, on 25 April 2011, DoD notified hospitals by letter of the process for them to “request a review of their TRICARE reimbursements (the ‘Letter’)” and “published a document titled ‘NOTICE TO HOSPITALS OF POTENTIAL ADJUSTMENT TO PAST PAYMENTS FOR OUTPATIENT RADIOLOGY SERVICES’ (the ‘Notice’)” on the TRICARE website. Id.; App. to Def.’s MSJ at A3–A9, ECF No. 203-1. The Notice described a nine-step methodology to “govern the review of payments for hospital outpatient radiology services and [the] payment of any discretionary net adjustments” by which hospitals could “request an analysis of their claims data for possible discretionary adjustment.” App. to Def.’s MSJ at A7.

2 The factual and procedural history in this Order contains only those facts pertinent to plaintiffs’ Motion for Discovery, ECF No. 269.

3 The Defense Health Agency now manages activities previously managed by TMA.

On 21 October 2013, plaintiffs brought this action claiming the government underpaid them for certain outpatient medical services they provided between 1 August 2003 and 1 May 2009. See Ingham Reg’l Med. Ctr. v. United States, 126 Fed. Cl. 1, 9 (2016), aff’d in part, rev’d in part, 874 F.3d 1341 (Fed. Cir. 2017). Plaintiffs allege the approximately six years of underpayment breached two contracts and violated various statutory and regulatory provisions. Id. Plaintiffs estimate several thousand hospitals submitted requests for discretionary payment, including the six named plaintiffs in this case. See id. at 16. Plaintiffs therefore seek to represent a class of as many as 1,610 similarly situated hospitals. See Pls.’ Mem. in Supp. of Mot. to Certify at 1, ECF No. 77; see also Mot. to Certify, ECF No. 76. On 11 February 2020, during the parties’ second discovery period, plaintiffs requested from the government “[a]ny and all data concerning hospital outpatient services claims and TRICARE reimbursement for hospital outpatient services claims during the relevant time period . . . .” See App. to Pls.’ Disc. Mot. at 23, ECF No. 269; Gov’t’s Disc. Resp. at 11, ECF No. 270. The government rejected this request for records from “thousands of hospitals . . . that are not [named] plaintiffs” on 16 March 2020 and instead only “produce[d] the data requested for the six plaintiffs in this lawsuit.” App. to Pls.’ Disc. Mot. at 29. Plaintiffs filed a motion to clarify the case schedule or, in the alternative, to compel discovery of “data and documents relating to the [g]overnment’s calculation of payments under the [DPP] for all putative class members, not just the named [p]laintiffs” on 31 July 2020, the last day of discovery. See Pls.’ Mot. to Compel (“Pl.’s MTC”) at 2, ECF No. 161 (emphasis added). In response, the government stated, “[t]here is no basis for the Court to . . .
compel extraneous discovery of hospitals that are not now in this lawsuit.” Def.’s Resp. to Pl.’s MTC (“Def.’s MTC Resp.”) at 2, ECF No. 166. During a status conference on 13 October 2020, the parties agreed to table plaintiffs’ discovery request and associated Motion to Compel pending resolution of the government’s then-pending Motion for Reconsideration, ECF No. 150, and any additional potentially dispositive motions. See 13 Oct. 2020 Tr. (“SC Tr.”) at 27:13–28:9, ECF No. 178 (“THE COURT: . . . So to state differently, then, [plaintiffs agree] to stay consideration of this particular [discovery] issue until class certification is decided? [PLAINTIFFS:] Yes, that would be fine. THE COURT: . . . [W]ould the [g]overnment agree with that? [THE GOVERNMENT:] Yes, [y]our [h]onor . . . [but] the [g]overnment still intends to file a motion for summary judgment. . . . THE COURT: Okay. So on the [g]overnment’s motion for summary judgment . . . that should probably not be filed until at least after the motion for reconsideration is resolved? [THE GOVERNMENT:] That’s correct.”). On 5 June 2020, plaintiffs filed a renewed motion to certify a class and appoint class counsel (“Pls.’ Class Cert.”), ECF No. 146, which the parties fully briefed. See Def.’s Class Cert. Resp., ECF No. 207; Pls.’ Class Cert. Reply, EF No. 226. On 26 August 2021, the government filed a motion for summary judgment (“Def.’s MSJ”), ECF No. 203. Plaintiffs filed an opposition to the government’s motion for summary judgment on 4 February 2022 (“Pls.’ MSJ Resp.”), ECF No. 225, and on 11 March 2022, the government filed a reply (“Def.’s MSJ Reply”), ECF No. 234. Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 3 of 23 - 4 - “The Court [granted] the government’s [M]otion for [S]ummary [J]udgment as to plaintiffs’ hospital-data duty and mutual mistake of fact claims but [denied] the government’s [M]otion as to plaintiffs’ TMA-data duty and alternate zip code claims[,] . . . [and stayed] the evidentiary motions” on 28 November 2022. Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 428 (2022). The Court, deeming the government’s settlement arrangements with plaintiffs to be contracts (the “DPP Contracts”), specifically found “the DPP Contract[s] only obligated TMA to use its data, not the hospitals’ data,” leaving the government’s data as the only set relevant to this case. Id. at 427. The Court held the government’s (1) “failure to extract thirteen line items [meeting all qualifications for extraction] for Integris Baptist and Integris Bass Baptist”; and (2) failure to adjust “five . . . line items [for Integris Baptist] during the DPP because of an alternate zip code” constituted breach of the DPP Contracts. Id. at 409–10, 412. “Based on the summary judgment holding . . . the Court [found it needed] further information regarding how plaintiffs in this post-summary judgment smaller class would meet the requirements for class certification.” Id. “The Court accordingly decline[d] to rule on plaintiffs’ class certification motion . . . [a]s the only surviving claims are breach of contract for failure to follow the DPP in a few limited circumstances, [and] the parties did not adequately brief the narrower proposed class of plaintiffs arising under the remaining claims.” Id. 
The Court ordered the parties to file “a joint status report [(JSR)] providing the parties’ views on class certification for the smaller class of plaintiffs affected by the government’s breach of contract for failure to follow the DPP in limited circumstances and on whether further briefing is necessary.” Id. On 28 December 2022, the parties filed a JSR providing their opposing positions on whether plaintiffs can request further discovery related to class certification: “plaintiffs expressly reserve, and do not waive, any rights that they may currently have, or may have in the future, with respect to additional class certification fact or expert discovery”; and “the [g]overnment opposes any further fact or expert discovery in connection with plaintiffs’ amended/supplemental motion for class certification, and, in agreeing to the foregoing briefing schedule, is not agreeing to any further fact or expert discovery in this case.” 28 Dec. 2022 JSR at 2, ECF No. 262. Plaintiffs then filed a motion requesting leave to conduct further discovery and submit a supplemental expert report on 21 March 2023 (“plaintiffs’ Discovery Motion”). Pls.’ Disc. Mot., ECF No. 269. The government filed a response on 21 April 2023. Gov’t’s Disc. Resp. Plaintiffs filed a reply on 9 May 2023. Pls.’ Disc. Reply, ECF No. 271. The Court held oral argument on 19 July 2023. See 5 June 2023 Order, ECF No. 272; 19 July 2023 Oral Arg. Tr. (“Tr.”), ECF No. 276. On 31 August 2023, following oral argument on plaintiffs’ Discovery Motion, the government filed an unopposed motion to stay the case for the government to complete a “second look at the records [analyzed] . . . in the July 2019 expert report of Kennell . . . that were the subject of one of the Court’s liability rulings on summary judgment.” Def.’s Mot. to Stay at 1, ECF No. 277. The Court granted this Motion on the same day. Order, ECF No. 278. On 25 October 2023, the parties filed a JSR, ECF No. 284, in which the government addressed its findings4 and “proposed [a] way forward” in this case. 25 Oct. 2023 JSR at 2. In 4 In the 25 October 2023 JSR, the government explained twelve of the thirteen line items the government failed to extract for Integris Baptist and Integris Bass Baptist, see Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 412 (2022), were missed due to a “now-known” error in which “a very small set of patients comprised of military spouses . . . under age 65” were overlooked because they “receive Medicare Part A” but not Medicare Part B, Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 4 of 23 - 5 - response to the government’s data analysis, plaintiffs noted in the JSR “the [g]overnment’s update makes clear that [additional g]overnment data is required to evaluate which hospitals were affected by the [g]overnment’s breach of the DPP.” Id. at 16. Plaintiffs then likewise “[p]roposed [n]ext [s]teps” in this case, beginning with resolution of their Discovery Motion. Id. at 18. On 19 December 2023, the Court held a telephonic status conference to understand the technical aspects of plaintiffs’ discovery requests as they relate to the DPP process and algorithm. See Scheduling Order, ECF No. 285. B. Discovery Requests at Issue Plaintiffs seek leave to perform additional discovery stemming from the Court’s summary judgment holding “TMA [breached its duty] . . . to extract, analyze, and adjust radiology data from its database” by failing to (1) adjust “five . . . 
line items [for Integris Baptist] during the DPP because of an alternate zip code” and (2) “extract . . . thirteen line items [meeting the criteria for extraction] for Integris Baptist and Integris Bass Baptist.” Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 409–10, 412. Plaintiffs’ sought-after discovery includes a request for “the same data for the [putative] class hospitals” as plaintiffs currently “have [for the] six named [p]laintiffs,” Tr. at 50:14–19, to assist plaintiffs in identifying “line items in [TMA’s radiology data] . . . that met the [DPP C]ontract criteria but were excluded from the adjustment . . . .” Gov’t’s Disc. Resp. at 15. In all, plaintiffs “seek leave to (1) depose a [g]overnment corporate designee, (2) serve document requests, and (3) thereafter serve a supplemental expert report on the relevant class issues.” Pls.’ Disc. Mot. at 2. Plaintiffs further detail the purpose of each request: First, [p]laintiffs seek leave to depose a [g]overnment corporate designee to identify the various data sources in the [g]overnment’s possession from the relevant time period. Second, [p]laintiffs seek leave to serve . . . document requests to obtain critical data related to each Potential Class member hospital. Third, once the above discovery is completed, [p]laintiffs seek leave to serve a supplemental expert report that applies the DPP methodology to the relevant claims data to identify the Final Class. Id. at 6–7 (footnote omitted) (citations omitted). The second request, mirroring plaintiffs’ February 2020 request for “[a]ny and all data concerning hospital outpatient service claims and TRICARE reimbursement for hospital outpatient services claims during the relevant time period,” App. to Pl.’s Disc. Mot. at 23, comprises “(1) all information Kennell used in the DPP calculations for each Potential Class member hospital, as well as the underlying calculations[;] and (2) all hospital outpatient claims meaning the “outpatient services that these individuals receive are paid for . . . by TRICARE.” 25 Oct. JSR at 4, ECF No. 284. As a result of this Medicare arrangement, line items for this group of patients were not extracted as the individuals were mistakenly deemed Medicare, rather than TRICARE, recipients for procedures within the scope of the DPP. Id. The cause of the thirteenth unextracted line item remains unclear. Id. at 5. Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 5 of 23 - 6 - data available for each of the Potential Class member hospitals during the relevant time period.” Pl.’s Disc. Mot. at 8–9. This request includes: (1) CMAC rate files “needed to apply the DPP methodology”; (2) “[d]ata on hospital outpatient radiology services claim line items for each Potential Class member hospital”; (3) “[d]ata concerning hospital outpatient services claim line items for each Potential Class member hospital” to verify the radiology files are complete; and (4) “TRICARE Encounter Data (‘TED’) records and Health Care Service Records (‘HCSR’).” Id. at 9–10. II. Applicable Law This court’s application of the Rules of the United States Court of Federal Claims (“RCFC”) is guided by case law interpreting the Federal Rules of Civil Procedure (FRCP). See RCFC rules committee’s note to 2002 revision (“[I]nterpretation of the court’s rules will be guided by case law and the Advisory Committee Notes that accompany the Federal Rules of Civil Procedure.”). 
Regarding the scope of discovery, the rules of this court provide: [P]arties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit. RCFC 26(b)(1). “Questions of the scope and conduct of discovery are . . . committed to the discretion of the trial court.” Florsheim Shoe Co., Div. of Interco, Inc. v. United States, 744 F.2d 787, 797 (Fed. Cir. 1984). The Court of Federal Claims generally “afford[s] a liberal treatment to the rules of discovery.” Securiforce Int’l Am., LLC v. United States, 127 Fed. Cl. 386, 400 (2016), aff’d in part and vacated in part on other grounds, 879 F.3d 1354 (Fed. Cir. 2018), cert. denied, 139 S. Ct. 478 (2018) (mem.). “[T]he [C]ourt must be careful not to deprive a party of discovery that is reasonably necessary to afford a fair opportunity to develop and prepare the case.” Heat & Control, Inc. v. Hester Indus., Inc., 785 F.2d 1017, 1024 (Fed. Cir. 1986) (quoting FED. R. CIV. P. 26(b)(1) advisory committee’s note to 1983 amendment). Further, “[a] trial court ‘has wide discretion in setting the limits of discovery.’” Schism v. United States, 316 F.3d 1259, 1300 (Fed. Cir. 2002) (quoting Moore v. Armour Pharm. Co., 927 F.2d 1194, 1197 (11th Cir. 1991)). This court has previously found such “discretion [extends to] . . . deciding whether to grant a motion to . . . reopen discovery.” Croman Corp. v. United States, 94 Fed. Cl. 157, 160 (2010) (citing Te-Moak Bands of W. Shoshone Indians of Nev. v. United States, 948 F.2d 1258, 1260 (Fed. Cir. 1991)); see also Wordtech Sys., Inc. v. Integrated Networks Sols., Inc., 609 F.3d 1308, 1322–23 (Fed. Cir. 2010) (citing Coleman v. Quaker Oats Co., 232 F.3d 1271, 1294 (9th Cir. 2000)) (applying Ninth Circuit law in determining trial court did not abuse its discretion in refusing to reopen discovery). Notwithstanding, modification of a court-imposed schedule, including a discovery schedule, may be done “only for good cause and with the judge’s consent.” RCFC 16(b)(4). In Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 6 of 23 - 7 - High Point Design, the Federal Circuit applied Second Circuit law5 when discussing the good cause standard of FRCP 16(b)(4) for amending a case schedule. “When assessing whether good cause has been shown, ‘the primary consideration is whether the moving party can demonstrate diligence.’” High Point Design LLC v. Buyers Direct, Inc., 730 F.3d 1301, 1319 (Fed. Cir. 2013) (quoting Kassner v. 2nd Ave. Delicatessen, Inc., 496 F.3d 229, 244 (2d Cir. 2007)); see also Adv. Software Design Corp. v. Fiserv, Inc., 641 F.3d 1368, 1381 (Fed. Cir. 2011) (“Under the good cause standard, the threshold inquiry is whether the movant has been diligent.” (citing Sherman v. Winco Fireworks, Inc., 532 F.3d 709, 717 (8th Cir. 2008))). This “requires a showing that even with the exercise of due diligence the moving party could not meet the order’s timetable.” Slip Track Sys., Inc. v. Metal-Lite, Inc., 304 F.3d 1256, 1270 (Fed. Cir. 2002) (applying Ninth Circuit law in finding defendant’s attempt to amend the pleadings first required modification of the scheduling order under FRCP 16(b)(4)). 
Trial courts may also consider “other relevant factors including, in particular, whether allowing the amendment . . . will prejudice [the opposing party].” High Point Design, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). Prejudice related to the reopening of discovery may involve the delay of proceedings. Wordtech Sys., 609 F.3d at 1322 (“[A] need to reopen discovery and therefore delay the proceedings supports a district court’s finding of prejudice from a delayed motion to amend the complaint.” (quoting Lockheed Martin Corp. v. Network Sols., Inc., 194 F.3d 980, 986 (9th Cir. 1999))). III. Parties’ Arguments Plaintiffs contend “good cause exists for [p]laintiffs to conduct . . . additional limited discovery,” Pls.’ Disc. Mot. at 7–8 (citing Geneva Pharms. Tech. Corp. v. Barr Lab’ys, Inc., Nos. 98 Civ. 861, 99 Civ. 3687, 2005 WL 2132438, at *5 (S.D.N.Y. Sept. 6, 2005)), largely mirroring their “live and pending request for discovery [from February 2020] that[ ha]s been tabled” since October 2020, Tr. at 37:16–17; see SC Tr. at 27:13–23. Specifically, plaintiffs argue, “[t]he Court’s ruling on the [g]overnment’s Summary Judgment Motion is a material event that has fundamentally altered the scope of this case.” Pl.’s Disc. Mot. at 7–8 (citing Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 428 (2022) (“[T]he parties did not adequately brief the narrower proposed class of plaintiffs arising under the remaining claims.”)). Plaintiffs state the requested “discovery will allow them to provide the Court with the information required for the determination of the Final Class, and that this will greatly assist the Court with its ruling on class certification.” Id. at 10. At oral argument, the government acknowledged “[p]laintiffs[’] [2020] request [for] all class hospital data” concerned much of the same information plaintiffs are “asking for now.” Tr. at 103:7–17. The government, however, maintains “plaintiffs’ motion to reopen fact and expert discovery should be denied.” Gov’t’s Disc. Resp. at 17. Specifically, the government argues plaintiffs “filed this case, moved for class certification twice, and proceeded through two full rounds of fact and expert discovery, based upon . . . [p]laintiffs’ view of the law.” Id. at 16. The 5 RCFC 16(b)(4) is identical to the corresponding Rule 16(b)(4) of the Federal Rules of Civil Procedure. Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 7 of 23 - 8 - government therefore argues plaintiffs should not be permitted to “reopen fact and expert discovery” simply because “on summary judgment,” the “legal theories that animated plaintiffs’ [previous] discovery and expert reports have been shown . . . to be . . . wrong.” Id. at 16–17. The government argues “[a] party’s realization that it elected to pursue the wrong litigation strategy is not good cause for amending a schedule,” so plaintiffs have failed to show good cause to reopen discovery as they request. Gov’t’s Disc. Resp. at 17 (quoting Sys. Fuels, Inc. v. United States, 111 Fed. Cl. 381, 383 (2013)). Alluding to the standard for reopening discovery, the government argues “no actions by plaintiffs . . . even remotely approximate the showing of diligence required under RCFC 16 . . . .” Id. at 35. 
The government also argues plaintiffs’ requests “overwhelming[ly] and incurabl[y] prejudice . . . the [g]overnment.” Id. at 38. IV. Whether Good Cause Exists to Reopen Discovery As noted supra Section III, plaintiffs contend “good cause exists for [p]laintiffs to conduct . . . additional limited discovery,” Pls.’ Disc. Mot. at 7–8 (citing Geneva Pharms. Tech. Corp. v. Barr Lab’ys, Inc., Nos. 98 Civ. 861, 99 Civ. 3687, 2005 WL 2132438, at *5 (S.D.N.Y. Sept. 6, 2005)), largely mirroring their “live and pending request for discovery [from February 2020] that[ ha]s been tabled” since October 2020, when the parties agreed to first proceed with the government’s Motion for Reconsideration and Motion for Summary Judgment. Tr. at 37:16– 17; see SC Tr. at 27:13–23. Plaintiffs believe the requested “discovery will allow them to provide the Court with the information required for the determination of the Final Class, and that this will greatly assist the Court with its ruling on class certification.” Pl.’s Disc. Mot. at 10. In contrast, the government asserts plaintiffs have, in two previous rounds of discovery and in their summary judgment briefing, chosen to pursue a litigation strategy based on a class damages model relying on hospital and government data and cannot now justify reopening discovery because they need to change tactics following the Court’s summary judgment ruling limiting the scope of this case to the government’s data. See Gov’t’s Disc. Resp. at 22–23. Specifically, the government contends plaintiffs have neither made the required showing of diligence during past discovery periods to justify modifying the Court’s discovery schedule nor adequately refuted the government’s claim this discovery is prejudicial. See Gov’t’s Disc. Resp. at 28, 35. “A trial court ‘has wide discretion in setting the limits of discovery.’” Schism v. United States, 316 F.3d 1259, 1300 (Fed. Cir. 2002) (quoting Moore v. Armour Pharm. Co., 927 F.2d 1194, 1197 (11th Cir. 1991)). This court has previously found such “discretion [is applicable] in deciding whether to grant a motion to . . . reopen discovery.” Croman Corp. v. United States, 94 Fed. Cl. 157, 160 (2010) (citing Te-Moak Bands of W. Shoshone Indians of Nev. v. United States, 948 F.2d 1258, 1260 (Fed. Cir. 1991)). RCFC 16(b)(4) permits modification of a court-imposed schedule, such as to re-open discovery, “only for good cause and with the judge’s consent.”6 Good cause “requires a showing that even with the exercise of due diligence the moving party could not meet the order’s timetable.” Slip Track Sys., Inc. v. Metal-Lite, Inc., 304 F.3d 1256, 1270 (Fed. Cir. 2002) (applying Ninth Circuit law in finding defendant’s attempt to amend the 6 At oral argument, the parties agreed plaintiffs are requesting the Court reopen discovery, meaning this good cause standard applies. Tr. 99:14–19: “[PLAINTIFFS:] I think, as between [supplementation and reopening], th[ese requests] probably fit[] better in the reopening category as between those two . . . . THE COURT: So . . . the standard for reopening is good cause? [PLAINTIFFS:] Yes. THE COURT: [The government], [do] you agree? [THE GOVERNMENT:] I agree.” Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 8 of 23 - 9 - pleadings first required modification of the scheduling order under FRCP 16(b)(4)). Likewise, in determining whether good cause exists to reopen discovery, a trial court may consider “other relevant factors including, in particular, whether allowing the amendment . . . 
will prejudice [the opposing party].” High Point Design LLC v. Buyers Direct, Inc., 730 F.3d 1301, 1319 (Fed. Cir. 2013) (quoting Kassner v. 2nd Ave. Delicatessen, Inc., 496 F.3d 229, 244 (2d Cir. 2007)). The Court accordingly must determine whether good cause exists to reopen discovery as requested by plaintiffs by analyzing plaintiffs’ diligence and whether the requested discovery prejudices the government. The Court begins with plaintiffs’ document requests. A. Document Requests Plaintiffs request the government turn over “critical data related to each Potential Class member hospital” and argue “denying ‘precertification discovery where it is necessary to determine the existence of a class is an abuse of discretion.’” Pls.’ Disc. Mot. at 6–7; Pls.’ Disc. Reply at 2 (quoting Perez v. Safelite Grp. Inc., 553 F. App’x 667, 669 (9th Cir. 2014)). These document requests specifically target “(1) all information Kennell used in the DPP calculations for each Potential Class member hospital, as well as the underlying calculations, and (2) all hospital outpatient claims data available for each of the Potential Class member hospitals during the relevant time period.” Pls.’ Disc. Mot. at 8–9. Plaintiffs’ goal is to acquire all “radiology line item[]” data and other information necessary to “apply the DPP methodology” to all of the putative class members’ claims data from the DPP period. Id. at 9. Plaintiffs contend “good cause exists” for the Court to reopen discovery with respect to these documents because the “Court’s ruling on the [g]overnment’s Summary Judgment Motion is a material event that . . . fundamentally altered the scope of this case.” Id. at 8. Namely, plaintiffs’ “damages are now limited to those claims involving errors in the [g]overnment’s data,” so plaintiffs allege this data, which “by its very nature [is] exclusively in [the government’s] possession,” is necessary “to identify the class members.” Pls.’ Disc. Reply at 3; see Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 427–29 (2022). Further, plaintiffs believe reopening discovery for this request is appropriate because in February 2020, during discovery, plaintiffs served a request for production on the government for the same “data concerning hospital outpatient services, claims, and . . . reimbursement for all class hospitals.” Tr. at 41:5–11. Plaintiffs likewise moved to compel this discovery in July 2020. Pls.’ Disc. Reply at 4 (“Plaintiffs also later moved for an order to conduct class discovery or for the [g]overnment to alternatively produce documents for all hospitals.”). Plaintiffs argue tabling this request and motion at the end of October 2020 while the case “proceeded with reconsideration, summary judgment, and other procedural” items did not do away with their “live and pending request for [this] discovery.” Tr. at 41:16–17, 37:14– 25, 128:5–6. With respect to prejudice, plaintiffs clarify their requests “will not prejudice the” government primarily because “the benefit to this case from the discovery would significantly outweigh any burden,” Pls.’ Disc. Reply at 8–9 (first citing Davita HealthCare Partners, Inc. v. United States, 125 Fed. Cl. 394, 402 n.6 (2016); and then citing Kennedy Heights Apartments Ltd. I v. United States, 2005 WL 6112633, at *4 (Fed. Cl. Apr. 26, 2005)), as this discovery will “assist the court with its ruling on class certification.” Pls.’ Disc. Mot. at 9–10. 
Further, plaintiffs contend any prejudice could be cured at trial by cross-examination of plaintiffs’ expert, who will use this data in a future supplemental report. Pls.’ Disc. Reply at 10 (citing Panasonic Commc’ns Corp. of Am. v. United States, 108 Fed. Cl. 412, 416 (2013)). Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 9 of 23 - 10 - The government argues good cause does not exist to reopen discovery as requested by plaintiffs. With respect to diligence, the government first asserts plaintiffs’ 31 July 2020 Motion regarding class discovery was not diligent because it was filed on the last day of the discovery period. Gov’t’s Disc. Resp. at 33–34. Next, the government argues “[p]laintiffs make no claim that it was impossible during two separate discovery periods for them to have served on the [g]overnment . . . the proposed discovery requests . . . because it obviously was not.” Id. at 28 (citation omitted). Rather, per the government, “plaintiffs disregarded, rather than responding to, evidence, analysis, and law that was inconsistent with their theories of the case, even when the [g]overnment brought such issues to the fore.” Id. at 30. The government argues “plaintiffs ignored these issues at their peril throughout the entire second period of fact and expert discovery that followed, and that means that they were not diligent under the law” and are not now entitled to discovery to assist them in changing their theory of the case. Id. at 31–32. Concerning prejudice, the government argues “plaintiffs are proposing fact discovery on a scale never before undertaken in this case, a new expert report for the [g]overnment to then respond to, more expert depositions, and, no doubt, additional Daubert and class-related motions practice, resulting in substantial delay in bringing this matter to resolution.” Id. at 36 (footnote omitted). “Permitting plaintiffs to now evade a long overdue reckoning, and attempt to moot [the government’s motions to exclude plaintiffs’ expert reports], in addition to being completely contrary to law, [according to the government,] deprives the [g]overnment of its day in court for what should be an imminent resolution of this matter.” Id. 1. Diligence A finding of diligence sufficient to modify a case schedule “requires a showing that even with the exercise of due diligence the moving party could not meet the order’s timetable.” Slip Track Sys., Inc., 304 F.3d at 1270 (applying Ninth Circuit law in finding defendant’s attempt to amend the pleadings first required modification of the scheduling order under FRCP 16(b)(4)). On 11 February 2020, at the very early stages of the “re-opened period of fact discovery,” plaintiffs “served the [g]overnment with additional document requests,” including a request for “[a]ny and all data concerning hospital outpatient services claims and TRICARE reimbursement for hospital outpatient services claims during the relevant time period . . . .” Gov’t’s Disc. Resp. at 11 (citations omitted); see also App. to Pls.’ Disc. Mot. at 23. At the time, the government “objected to this request” and only “produce[d] the data requested for the six [named] plaintiffs.” Id. at 11–12 (citations omitted). Over the next several months, the parties continued with fact and expert discovery, during which time the Court “established a schedule for briefing on class certification and summary judgment.” Id. at 13 (citing Order at 2, ECF No. 143). On 31 July 2020, “the date . . . both fact and expert discovery closed, plaintiffs filed a motion . . . 
[to] compel[] the [g]overnment to produce documents for all hospitals, rather than for just the six representative plaintiffs.” Id. at 14. Plaintiffs therefore requested the data at issue in this document request at least twice before the instant Motion—once on 11 February 2020 and again on 31 July 2020. Tr. at 81:10–11 (“[PLAINTIFFS:] [W]e did ask for all of those things that [the government is] talking about [before we t]abled the issues . . . .”). They thus argue they “meet [the] diligence [standard] here because [they] asked for” this information “a long time ago” and continued to believe it “was a live and open issue.” Tr. at 128:5–6; Tr. at 104:25–105:9 (“[PLAINTIFFS:] We served the discovery request in the discovery period. We got objections from the [g]overnment. We conferred several times with [the government] about this to figure Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 10 of 23 - 11 - out what we are doing here . . . . We then had the conference with the Court because we had filed our motion [to compel] and the [g]overnment fought to arrange things in the way they arranged. So we walked away from that believing this [discovery] was a live and open issue.”); see 28 Dec. 2022 JSR at 2. The government’s first diligence argument, as noted supra Section IV.A, is plaintiffs filed their Motion to Compel on the final day of discovery and thus did not diligently pursue this request. Gov’t’s Disc. Resp. at 33–34; Tr. at 84:21–23 (“[THE GOVERNMENT:] [Plaintiffs] did nothing between March and July. There was no agreement to table during that four-month period. And then in July, they filed a motion [to compel.]”). Despite the already pending 11 February 2020 “request for all class hospital data,” the government contends plaintiffs “should have filed more motions to compel” earlier in the discovery period. Tr. at 85:4–10; Tr. at 107:2– 5 (“[PLAINTIFFS:] [The government is saying] we raised [these discovery issues] too long ago and didn’t come back often enough.”). To the extent the government alleges “filing a motion to compel on the very last day of discovery is . . . untimely, not diligent,” however, the government overlooks the significance of plaintiffs’ timely February 2020 request. See Gov’t’s Disc. Resp. at 34. Plaintiffs did not first make this request the day discovery closed; they asked the government to produce these documents early in the discovery period. Pls.’ Disc. Mot. at 4–5. Plaintiffs then “conferred several times with” the government and waited to see whether the government’s production would be sufficiently responsive to their February 2020 request despite the government’s objection. Tr. at 104:24–105:6 (“[PLAINTIFFS]: We served the discovery request in the discovery period. We got objections from the [g]overnment. We conferred several times with [the government] about this . . . We then . . . filed our motion. . . .”). Thus, only when it became clear the government was not going to produce plaintiffs’ requested information or any comparable data in the final days of the discovery period did plaintiffs file a motion to compel. Id. Further, the government’s cited cases for the proposition motions filed at the end of discovery are untimely are from out-of-circuit district courts and contain factual situations inapposite to this case. See Gov’t’s Disc. Resp. at 34 (first citing Rainbow Energy Mktg. Corp. v. DC Transco, LLC, No. 21-CV-313, 2022 WL 17365260, at *2 (W.D. Tex. Dec. 
1, 2022) (denying a renewed motion to compel after: (1) the plaintiff’s initial motion was denied, (2) the plaintiff filed a motion to extend discovery after the period had closed, and (3) the plaintiff filed a renewed motion to compel on the last day of extended discovery); then citing U.S. ex rel. Gohil v. Sanofi U.S. Servs., Inc., No. 02-2964, 2020 WL 1888966, at *4 (E.D. Pa. Apr. 16, 2020) (rejecting a motion to compel in part because the requesting party made a “misrepresentation that it did not know” the importance of the information until just before the close of discovery); then citing Summy-Long v. Pa. State Univ., No. 06–cv–1117, 2015 WL 5924505, at *2, *5 (M.D. Pa. Oct. 9. 2015) (denying the plaintiff’s motion to compel “because [her] request [wa]s overly broad and unduly burdensome and because granting further discovery extensions . . . would strain the bounds of reasonableness and fairness to all litigants”); then citing In re Sulfuric Acid Antitrust Litig., 231 F.R.D. 331, 332–33, 337 (N.D. Ill. 2005) (acknowledging there is “great[] uncertainty” as to whether courts should deny motions to compel filed “very close to the discovery cut-off date” and recognizing “the matter is [generally] left to the broad discretion” of the trial court “to control discovery”); then citing Toone v. Fed. Express Corp., No. Civ. A. 96-2450, 1997 WL 446257, at *8 (D.D.C. July 30, 1997) (denying the plaintiff’s motion to compel filed on the last day of discovery because (1) given the close proximity to the original date for trial, “the defendant could have responded to the request . . . on Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 11 of 23 - 12 - the day of the original trial date,” and (2) it was moot); and then citing Babcock v. CAE-Link Corp., 878 F. Supp. 377, 387 (N.D.N.Y. 1995) (denying a motion to compel regarding discovery requests served on the last day of discovery). The Court therefore is not persuaded plaintiffs’ Motion to Compel was untimely. Schism, 316 F.3d at 1300 (quoting Moore, 927 F.2d at 1197). The government further contends “[p]laintiffs make no claim that it was impossible during two separate discovery periods for them to have served on the [g]overnment” this “proposed discovery request[].” Def.’s Disc. Resp. at 28. The government argues plaintiffs’ “tunnel vision” with respect to their legal theory caused plaintiffs to ignore “evidence, analysis, and law” not directly consistent with “their theories of the case, even when the [g]overnment brought such issues to the fore.” Id. at 29–30. “Turning a blind eye . . . [due to] legal error is not the same thing as having the inability to meet court deadlines,” according to the government, so “plaintiffs cannot demonstrate the requisite diligence.” Id. at 30. Although plaintiffs did not file “more motions to compel,” plaintiffs timely made their February 2020 request and timely filed their July 2020 Motion to Compel, supra. See Tr. at 85:4–9. To the extent the government alleges plaintiffs are not entitled to reopen discovery to amend their litigation strategy because the government “unmasked on summary judgment” plaintiffs’ “legal errors,” the government overlooks its own admission at oral argument, “[p]laintiffs’ request[s] [for] all class hospital data” in February and July 2020 sought the same data “[plaintiffs a]re asking for now.” Tr. 103:7–17; Def.’s Disc. Resp. at 28. 
Contrary to the government’s argument, plaintiffs therefore did not have “tunnel vision” causing them to ignore the requested evidence earlier in this litigation. See Def.’s Disc. Resp. at 28–30. Rather, plaintiffs requested this data during the appropriate discovery periods, only to have their request put on hold “because the [g]overnment ha[d] additional motions” it wished the Court to first decide. See Tr. at 85:12–21 (the court); Pls.’ Disc. Mot. at 4; App. to Pl.’s Disc. Mot. at 23; Tr. at 105:5–9; Def.’s Disc. Resp. at 14 (“Ultimately, the issues raised by this motion [to compel] were tabled by agreement of the parties.”); Tr. at 55:5–6 (“[PLAINTIFFS:] [T]he [g]overnment fought tooth and nail [to have the Court] hear [their] summary judgment motion first.”). Plaintiffs have thus considered these requests “a live and open issue” pending resolution of the government’s motions ever since, prompting them to file the instant Motion upon the Court issuing its Summary Judgment Order in November 2022. Tr. at 105:8–9 (“[PLAINTIFFS:] [W]e walked away from that [tabling discussion] believing this [discovery] was a live and open issue.”). Finally, while this “data [may] not [have been] necessary for summary judgment . . . [it is] for class certification.”7 Tr. at 111:10–11 (plaintiffs); Clippinger v. State Farm Mut. Auto. Ins. Co., 2021 WL 1894821, at *2 (“[C]lass certification discovery is not relevant [at the summary judgment stage].”); Tr. at 111:8–11 (“[PLAINTIFFS]: Well, I think like in Clippinger, there is some wisdom to the concept that maybe all of that data is not necessary for summary judgment, but then becomes necessary for class certification.”). To that end, the government 7 To the extent the government relies on plaintiffs’ 13 April 2020 statement plaintiffs “will not need this information [pertaining to hospitals other than the six named plaintiffs] prior to resolving [p]laintiffs’ [M]otion for [C]lass [C]ertification,” the government overlooks the substantial change in circumstances discussed infra Section IV.B.1. See Def.’s Disc. Resp. at 12–13 (quoting 21 May 2020 JSR at 3–4, ECF No. 140). The government likewise ignores plaintiffs’ agreement to table these discovery requests temporarily in October 2020, at which time plaintiffs acknowledged they would eventually re-raise these requests, even if—at the time—the plan was to do so after class certification. See Tr. at 85:12–21, 55:5–6. Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 12 of 23 - 13 - cannot “object[] to [plaintiffs’ document] request” in 2020 as “irrelevant and not proportional to the needs of the case insofar as plaintiffs seek . . . [information from] thousands of hospitals” only to now argue it is too late for this discovery and “plaintiffs [have] squandered . . . their allotted discovery periods . . . .” Gov’t’s Disc. Resp. at 11, 28; Tr. at 106:7–14 (“[PLAINTIFFS:] [I]t’s almost like the [g]overnment—they’re playing gotcha here. . . . [T]hey didn’t want to give us the information at the time [of discovery] and then they say, well, here’s summary judgment first and we can defer this until later . . . and now we’ve got a summary judgment opinion and now [they] say gotcha . . . .”). Nor can the government object to turning over the requested data in 2020 and now only to “use [plaintiffs’] lack of this data as a sword” come class certification. Tr. at 126:23–127:2. Indeed, “this has never been a case where” plaintiffs “said we’re not going to look at that [requested] data . . . 
[or] we’re not eventually going to be coming for that.” Tr. at 126:18–20. To the contrary, plaintiffs “requested this [data] during discovery,” and have long maintained this discovery “is the way to” “figure out . . . what are we dealing with” from a class perspective, including in the JSR filed after the Court’s November 2022 Summary Judgment Order, in which plaintiffs reserved the right to move for “additional class certification fact or expert discovery.” Tr. at 44:10, 56:21–22; 28 Dec. 2022 JSR at 2. By way of the government’s objection to plaintiffs’ February 2020 request and the parties’ tabling this request in October 2020, plaintiffs “even with the exercise of due diligence[,]” could not have obtained the requested information in a way sufficient to “meet the [Court’s discovery] timetable.” Slip Track Sys., Inc., 304 F.3d at 1270. Had they “received the data in 2020,” they “would have . . . run the DPP” for all potential class members as plaintiffs now request the opportunity to. Tr. at 113:12–15. Instead, plaintiffs did not have access to the data so continued to raise this request at all reasonably appropriate times. See Tr. at 112:8–9 (“[PLAINTIFFS:] [I]t was not possible for us to have done this [DPP] calculation without th[is] data.”). The Court accordingly finds plaintiffs were sufficiently diligent to justify a finding of good cause to reopen fact discovery as to plaintiffs’ document request for “critical data related to each Potential Class member hospital,” Pls.’ Disc. Mot. at 6. Slip Track Sys., Inc., 304 F.2d at 1270. 2. Prejudice In considering whether to reopen discovery, a trial court may consider, in addition to the requesting party’s diligence, “other relevant factors including, in particular, whether allowing the amendment . . . will prejudice [the opposing party].” High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244). Prejudice related to the reopening of discovery may involve the delay of proceedings. Wordtech Sys., Inc. v. Integrated Networks Sols., Inc., 609 F.3d 1308, 1322 (Fed. Cir. 2010) (“[A] need to reopen discovery and therefore delay the proceedings supports a district court’s finding of prejudice from a delayed motion to amend the complaint.” (quoting Lockheed Martin Corp. v. Network Sols., Inc., 194 F.3d 980, 986 (9th Cir. 1999))). Further, RCFC 26(b)(1) provides: [P]arties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 13 of 23 - 14 - burden or expense of the proposed discovery outweighs its likely benefit. RCFC 26(b)(1). “Questions of the scope and conduct of discovery are . . . committed to the discretion of the trial court.” Florsheim Shoe Co., Div. of Interco, Inc. v. United States, 744 F.2d 787, 797 (Fed. Cir. 1984). The government contends reopening discovery is prejudicial because “plaintiffs are proposing fact discovery on a scale never before undertaken in this case, a new expert report for the [g]overnment to then respond to, more expert depositions, and, no doubt, additional Daubert and class-related motions practice, resulting in substantial delay in bringing this matter to resolution.” Gov’t’s Disc. Resp. 
at 35–36 (footnote omitted). Plaintiffs, on the other hand, argue: (1) the sought after data “is . . . exclusively in [the government’s] possession”; and (2) their request will not prejudice the government because it will have an opportunity to oppose plaintiffs’ expert report. Pls.’ Disc. Reply at 3, 8. Even if there is any prejudice to the government, plaintiffs assert “the benefit to this case from the discovery would significantly outweigh any burden to the parties,” id. at 9, because of the assistance the discovery would provide the Court in ruling on class certification of the “narrower proposed class of plaintiffs,” Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 428, left after summary judgment. Id. at 5, 9–10 (first citing Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 428; and then citing Alta Wind I Owner Lessor C v. United States, 154 Fed. Cl. 204, 217 (2021)). Indeed, plaintiffs argue “produc[ing] the data . . . [now will be] more efficient [than production after certification] [a]s there will be less hypothetical back-and-forth between the parties [during certification briefing]” if the government’s data is available to all sides. Tr. at 118:2–6. Any prejudice could also be cured at trial by cross-examination of plaintiffs’ expert, plaintiffs contend. Pls.’ Disc. Reply at 10. The Court’s Summary Judgment Order indicated “the Court . . . needs further information regarding how plaintiffs in this post-summary judgment smaller class would meet the requirements for class certification” before deciding plaintiffs’ motion for class certification. Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 428. To that end, mirroring their requests during the 2020 discovery period, plaintiffs ask the government to provide “(1) all information Kennell used in the DPP calculations for each Potential Class member hospital, as well as the underlying calculations, and (2) all hospital outpatient claims data available for each of the Potential Class member hospitals during the relevant time period.” Pls.’ Disc. Mot. at 8–9 (emphasis added). Plaintiffs argue this discovery will “benefit . . . this case” by providing the radiology data needed to determine “who is in the [now-narrowed] class.” Pls.’ Disc. Reply at 2, 9 (“Plaintiffs’ damages are now limited to those claims involving errors in the [g]overnment’s data”); Tr. at 57:1. The government has not refuted this claim. Tr. at 132:16–25 (“THE COURT: Just to make sure I understand, can you just quickly articulate the prejudice to the [g]overnment [from the Motion to Compel the data]? . . . [THE GOVERNMENT:] The [prejudice from the] [M]otion to [C]ompel is a significant reasonableness and proportionality concern . . . .”); Tr. at 56:18–57:14 (“[PLAINTIFFS:] [W]e really followed the Court’s lead, looking at the summary judgment opinion saying . . . go back and figure out now what we are dealing with . . . [with respect to] who is in the class . . . only on the [g]overnment’s [data] . . . . [THE GOVERNMENT:] I firmly disagree with that [procedural move]. I think that [p]laintiffs are trying to jump their original expert report . . . [a]nd under the law, [they] can’t.”). Rather, the government’s primary prejudice-related allegation is plaintiffs’ request violates the Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 14 of 23 - 15 - “[r]easonableness and proportionality” tenants set forth in RCFC 26(b)(1) because “[t]he [g]overnment has already incurred substantial expense,” Def.’s Disc. Resp. 
at 36, and plaintiffs have “not established a right to discovery of [non-named plaintiff] hospitals . . . based on what they have shown.” Tr. at 130:10–14. As the Court noted above, the government cannot argue plaintiffs’ document discovery request was too early before summary judgment and too late now that the government has incurred greater expense in litigating this case. See supra Section IV.A.1. Neither party knew the substantial impact summary judgment would have on the trajectory of this case, but the parties agreed to table plaintiffs’ discovery requests until after the Court’s summary judgment decision. See Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 395; SC Tr. at 27:13–23. As evidenced by the recent data analysis performed by the government, after summary judgment, “[g]overnment data is required to evaluate which hospitals were affected by the [g]overnment’s breach of” contract. 25 Oct. 2023 JSR at 16. Plaintiffs cannot be expected to argue, and the Court cannot “rule on[,] numerosity [and related class certification factors] if there[ i]s no evidence regarding the approximate number of hospitals who would fit the . . . requirements allowed in the summary judgment order.” Tr. at 108:6–11. The parties must both have an opportunity to review the relevant data held by the government to determine which hospitals should, or should not, be included in the putative class.8 See id. The requested data, which includes the pertinent “outpatient claims data” and the information “used in the DPP calculations,” Pls.’ Disc. Mot. at 8–9, is therefore highly relevant to the next step in this case— class certification—and, rather than delay this case, having this data will enable the Court to decide plaintiffs’ motion for class certification more efficiently. Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399 (“[T]he additional time . . . does not warrant the severe sanction of exclusion of data helpful to both parties in this litigation.”). To the extent the government argues the scale of the information requested is “grossly disproportionate to the needs of the case,” Tr. at 110:22, the government ignores: (1) plaintiffs’ and the Court’s substantial need to understand “who would be in the class” come time to brief and rule on class certification, Tr. at 55:24–25; and (2) the inability of plaintiffs and the Court to access this data “exclusively in [the government’s] possession” without production by the government, Pls.’ Disc. Reply at 3. See Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399; RCFC 26(b)(1) (“[P]arties may obtain discovery regarding any nonprivileged matter that is relevant . . . and proportional . . . [considering] the parties’ relative access to [the] relevant information . . . .” (emphasis added)). The government likewise overlooks its ability to rebut any arguments plaintiffs make using this data both before and at trial. See Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217 (“Other Court of Federal Claims judges have noted that providing the government an opportunity to file a rebuttal mitigates any prejudice that may have otherwise existed in providing plaintiff the opportunity to reopen the record.”). The documents plaintiffs request are therefore highly relevant and proportional to the needs of the case as they will provide plaintiffs and the Court 8 This is not a case where, as the government alleges, plaintiffs are “attempt[ing] to use discovery to find new clients upon learning of infirmities in the claims of putative class representatives.” Def.’s Disc. Resp. 
at 26–27 (first citing In re Williams-Sonoma, Inc., 947 F.3d 533, 540 (9th Cir. 2020); then citing Gawry v. Countrywide Home Loans, Inc., 395 F. App’x 152, 160 (6th Cir. 2010); Douglas v. Talk Am., Inc., 266 F.R.D. 464, 467 (C.D. Cal. 2010); Falcon v. Phillips Elec. N. Am. Corp., 304 F. App’x 896, 898 (2d Cir. 2008)). Rather, plaintiffs are requesting access to information held by the government to adequately brief class certification on behalf of the existing named plaintiffs and the putative class. See Pls.’ Disc. Mot. at 2 (“After completion of this discovery, [p]laintiffs would then file an amended motion for class certification.”). Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 15 of 23 - 16 - information necessary for a thorough analysis of class certification. Florsheim Shoe Co., Div. of Interco, Inc., 744 F.2d at 797; Davita HealthCare Partners, 125 Fed. Cl. at 399 (“[T]he additional time . . . does not warrant the severe sanction of exclusion of data helpful to both parties in this litigation.”); RCFC 26(b)(1). The Court accordingly finds any prejudice to the government caused by the scope of plaintiffs’ document request is mitigated by the benefit of the requested information to the efficient resolution of this case.9 See High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); Tr. at 118:2–6. The government will have ample opportunity to oppose any supplemental expert reports presented by plaintiffs using the requested data, including through cross-examination of plaintiffs’ experts at trial. See Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217 (“Other Court of Federal Claims judges have noted that providing the government an opportunity to file a rebuttal mitigates any prejudice that may have otherwise existed in providing plaintiff the opportunity to reopen the record.”). The Court therefore finds plaintiffs were diligent in pursuing this document discovery request and the government will not experience prejudice sufficient to warrant denying plaintiffs’ Motion as to the request. The Court accordingly grants plaintiffs’ document discovery request as tailored, infra Section V, to the liability found in the Court’s November 2022 Summary Judgment Order, as there is good cause to do so. See High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A CHARLES A. WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”); Pls.’ Disc. Mot. at 8–9. B. Plaintiffs’ Request to Depose a Government Corporate Designee Pursuant to Rule 30(b)(6) Plaintiffs also seek leave to “depose a [g]overnment corporate designee to identify which data sources were . . . available to the [g]overnment from the relevant time period, and where the relevant claims data resides.” Pls.’ Disc. Mot. at 8. Plaintiffs specify they are seeking “an hour . . . of deposition, just getting the [g]overnment to . . . confirm . . . the data sources” they have now and had during the relevant time periods “to make sure . . . there’s been no spoliation . . . .” Tr. at 117:22–25. In response, the government contends it previously identified an agency employee “as an individual with ‘discoverable information concerning TRICARE Encounter Data (TED), the DHA Military Health System Data Repository (MDR), and the creation, content and maintenance of records in both of those databases[,]’. . . 
[but] plaintiffs expressly declined a deposition during the established periods of fact and expert discovery[] and elected instead to proceed through limited interrogatories.” Defs.’ Disc. Resp. at 33. The government alleges “[p]laintiffs cannot reasonably be said to have been diligent in pursuing the 9 The Court emphasizes the government alone is in possession of the TMA data potentially comprising “tens of millions of records.” Tr. at 134:3. As such, the government is the only party capable of sorting and producing the large volumes of information. See RCFC 26(b)(1) (“[P]arties may obtain discovery regarding any nonprivileged matter that is relevant . . . and proportional . . . [considering] the parties’ relative access to [the] relevant information.”). Indeed, at the 19 December 2023 status conference, the government agreed it is capable of reviewing all data in its possession to identify line items of putative class members missed during DPP extraction due to issues akin to those impacting twelve out of the thirteen unextracted line items for Integris Baptist and Integris Bass Baptist. See 25 Oct. 2023 JSR; see also Ingham Reg’l Med. Ctr. v. United States, 163 Fed. Cl. 384, 409, 412 (2022). Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 16 of 23 - 17 - deposition that they now request when they intentionally eschewed [an offered deposition] during the established period of fact discovery.” Id. The government also reasserts its prejudice and diligence-related arguments discussed supra Section IV.A.1–2. See, e.g., id. at 28 (“Plaintiffs make no claim that it was impossible during two separate discovery periods for them to have served on the [g]overnment . . . the deposition notice . . . because it obviously was not.”); Tr. at 130:8–10 (“THE COURT: . . . So what’s the prejudice though? [THE GOVERNMENT]: Reasonableness and proportionality.”). 1. Diligence The government’s only novel diligence argument related to plaintiffs’ deposition request is plaintiffs previously declined an opportunity to depose an “an individual with ‘discoverable information concerning [TED and MDR], and the creation, content and maintenance of records in both of those databases.” Defs.’ Disc. Mot. Resp. at 33. The government otherwise broadly asserts plaintiffs were not diligent in pursuing their discovery requests. See, e.g., id. at 28. As determined supra Section IV.A.1, plaintiffs were diligent with respect to pursuing the government’s data and related information at the appropriate time during discovery. See, e.g., App. to Pls.’ Disc. Mot. at 23–24. The Court therefore only addresses the government’s argument related to previous deposition opportunities below. A “trial court ‘has wide discretion in setting the limits of discovery.’” Schism, 316 F.3d at 1300 (quoting Moore, 927 F.2d at 1197). Notwithstanding, modification of a court-imposed schedule may be done “only for good cause and with the judge’s consent.” RCFC 16(b)(4). “When assessing whether good cause has been shown, ‘the primary consideration is whether the moving party can demonstrate diligence.’” High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244). The government’s primary contention—plaintiffs were not diligent in pursuing the requested deposition because they turned down an offer to depose a government employee in May 2019—assumes a party cannot be diligent if they have, at any time in the past, “eschewed [similar discovery.]” Defs.’ Disc. Mot. Resp. at 33. 
Over the past four and a half years, however, this case has changed substantially. See Alta Wind I Owner Lessor C, 154 Fed. Cl. at 227 (granting additional discovery upon remand and reassignment of the case); see also Geneva Pharms. Tech. Corp., 2005 WL 2132438, at *5 (“[M]aterial events have occurred since the last discovery period, which justice requires that the parties have an opportunity to develop through discovery.”). As noted by plaintiffs, “[t]he Court’s ruling on the [g]overnment’s Summary Judgment Motion . . . fundamentally altered the scope of this case,” Pls.’ Disc. Mot. at 8, by substantially narrowing the potential class members and limiting plaintiffs’ “damages . . . to [two] claims involving errors in the [g]overnment’s data,” Pls.’ Disc. Reply at 3. “[T]o analyze the extent of the . . . error[s]” in the government’s data, 25 Oct. 2023 JSR at 16, and perform “a more accurate damages calculation” for the putative class members, Pls.’ Reply at 7, plaintiffs therefore need to understand the data sources available to the government now and at the time of line item extraction. See 25 Oct. 2023 JSR at 17 (“The only way to evaluate whether Mr. Kennell failed to extract all relevant data . . . for the entire class is for the [g]overnment to produce . . . [the discovery] [p]laintiffs seek.”); Pls.’ Disc. Reply at 7. In 2020, in contrast, at which time plaintiffs “elected . . . to proceed through limited interrogatories” rather than conduct the government’s offered deposition, the Court had not yet narrowed the scope of the case or limited the damages calculations to the government’s data. Defs.’ Disc. Resp. at 33. During the Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 17 of 23 - 18 - initial discovery periods, plaintiffs still reasonably believed their own data might be relevant and did not yet understand the importance of the government’s data. See Pls.’ Disc. Reply at 3; see also 25 Oct. 2023 JSR at 16. Plaintiffs therefore did not exhibit a lack of diligence by not accepting the government’s offer to depose an individual whose testimony, at the time, was less relevant to the case. The government has accordingly failed to produce evidence sufficient to show plaintiffs were not diligent in pursuing the requested deposition. Schism, 316 F.3d at 1300; High Point Design LLC, 730 F.3d at 1319; see also Alta Wind I Owner Lessor C, 154 Fed. Cl. at 227. 2. Prejudice As noted supra Section IV.A.2, courts considering requests to reopen discovery may consider whether and to what extent granting the request will prejudice the opposing party, including via delaying the litigation. High Point Design, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); Wordtech Sys., 609 F.3d at 1322 (quoting Lockheed Martin Corp., 194 F.3d at 986). Regarding plaintiffs’ deposition request, the government argues granting plaintiffs’ deposition request will, like plaintiffs’ document requests, result in additional expense and “substantial delay in bringing this matter to resolution.” Def.’s Disc. Resp. at 36. Plaintiffs indicated at oral argument, however, the requested deposition will be “an hour,” with the goal being simply to understand “the data sources” in the government’s possession. Tr. at 117:22. 
To the extent this short deposition of a government employee, which the government was prepared to allow for several years ago, will allow the case to proceed “more efficient[ly]” to class certification with fewer “hypothetical back-and-forth[s] between the parties” related to considerations like numerosity, see Tr. at 118:2–6, the Court finds the minimal potential prejudice to the government from this deposition is outweighed by the value of this information to the later stages of this litigation. Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399 (“[T]he additional time . . . does not warrant the severe sanction of exclusion of data helpful to both parties in this litigation.”). The Court therefore does not find the government’s argument regarding diligence or prejudice persuasive with respect to plaintiffs’ deposition request. The Court accordingly grants this request as there is good cause to do so.10 High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). C. Supplemental Expert Report Plaintiffs finally request leave to “serve a supplemental expert report on . . . relevant class 10 To the extent the government intended its arguments related to proportionality and relevance to apply to plaintiffs’ deposition request, the Court is unpersuaded. See Def.’s Disc. Resp. at 36. A single deposition lasting approximately one hour on subject matter on which the government previously offered to permit a deposition is not disproportionate to the needs of this case. Schism v. United States, 316 F.3d 1259, 1300 (Fed. Cir. 2002) (quoting Moore v. Armour Pharm. Co., 927 F.2d 1194, 1197 (11th Cir. 1991)); RCFC 26(b)(1). Likewise, the subject matter—the sources of the data plaintiffs request access to—is highly relevant in ensuring a complete and accurate data set free of spoliation. Schism, 316 F.3d at 1300 (quoting Moore, 927 F.2d at 1197); RCFC 26(b)(1); see supra Section IV.A.2. Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 18 of 23 - 19 - issues” upon completion of the above-requested discovery. Pls.’ Disc. Mot. at 2. Specifically, plaintiffs wish to “submit a supplemental expert report analyzing the [government’s] data and applying the DPP methodology to the correct universe of outpatient radiology line items . . . .” Id. at 10; see also Pls.’ Disc. Reply at 3 (“Plaintiffs’ supplemental expert report would identify the scope of the class, as requested by the Court.”); Tr. at 73:10–14 (“[PLAINTIFFS:] [I]t is a very complex formula. And I think that it is something that . . . you would want someone with experience with these data line items going through and doing it . . . it’s [objective] math. . . . It’s essentially a claims administrator.”). Plaintiffs make clear their initial expert report was an attempt at extrapolating the named plaintiffs’ data “across the class to come up with . . . estimated number[s],” which they now wish to update with “the exact numbers” once they receive the government’s data. Tr. at 69:4–16. Plaintiffs contend “[r]eopening discovery is warranted where supplemental information from an expert would assist the Court in resolving important issues . . . [s]uch [as] . . . ‘presenting the Court with a more accurate representation of plaintiffs’ damages allegations.’” Pls.’ Disc. 
Reply at 6 (first citing Kennedy Heights Apartments Ltd. I, 2005 WL 6112633, at *3–4; and then quoting Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217). Likening this case to Alta Wind, plaintiffs argue the Court should conclude here an “expert report will provide the Court with a damages estimate more accurately reflecting plaintiffs’ damages position [in light of the changes to the case rendered by summary judgment] . . . and therefore will likely assist the Court.” Id. at 7 (quoting Alta Wind, 154 Fed. Cl. at 216); Tr. at 68:17–69:16 (“[PLAINTIFFS]: With respect to Ms. Jerzak and the breach of contract, she did two things [in her report.] . . . One, she compared the hospital line items to the government line items for the named [p]laintiffs and did a straight objective calculation of what was the difference. . . . She also took those numbers and extrapolated them across the class to come up with an estimated number. THE COURT: A hypothetical. [PLAINTIFFS]: Yes . . . [r]ecognizing that if the class was certified . . . we’d have to do the exact numbers.”). Plaintiffs conclude this report will “not prejudice the [g]overnment in any way, and would actually benefit the [g]overnment” by providing an “opportunity . . . to oppose” additional contentions appropriate to the posture of the case. Id. at 8 (emphasis omitted) (citing Alta Wind I Owner Lessor C, 154 Fed. Cl. at 216). Plaintiffs note, however, “in [their] mind, this [report] is something that always was going to happen after certification” at the merits stage, Tr. at 73:15– 16 (emphasis added), as they do not “need an expert report for class certification because” the government “admitted breach,” Tr. at 96:3–4; Tr. at 63:22–64:6 (“[PLAINTIFFS:] [L]et’s say the Court certified a class here. The next step . . . is for merits. Someone is going to have to spit out a report saying here are the class members and when I run their . . . data . . . here are the differences and here’s the number that gets spit out.” (emphasis added)). The government reiterates its diligence and prejudice arguments discussed supra Sections IV.A–B with respect to plaintiffs’ request for leave to file a supplemental expert report. The government likewise refutes the notion plaintiffs’ current expert report is a “placeholder . . . that was[] [not] really meant to be real.” Tr. at 70:19–20. In other words, the government contends plaintiffs “meant th[eir earlier] expert report” to apply to “their currently pending motion for class cert[ification],” Tr. at 71:21–23, and now “seek to have the Court rescue them from their own litigation choices,” including the choice to file “expert damages models [that] could never be used to measure class damages.” Def.’s Disc. Resp. at 16–17. Plaintiffs should not be permitted to file a new expert report, according to the government, simply because “they have not . . . marshaled any legally cognizable expert evidence concerning the few claims that remain” Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 19 of 23 - 20 - after summary judgment. Def.’s Disc. Resp. at 17. To the extent plaintiffs concede the requested expert report “is for [the] merits” stage and not necessary for class certification, however, the government believes “a class cannot be certified without a viable expert damages methodology meeting the requirements of Comcast,” meaning plaintiffs’ pending motion for class certification automatically fails because “the only expert evidence in the record that bears on the two types of breaches found by the Court is . 
. . offered by the [g]overnment.” Id. at 22– 23 (citing Comcast Corp. v. Behrend, 569 U.S. 27, 33–34 (2013)). Indeed, according to the government, “plaintiffs are left with no expert model at all as to the few remaining contract claims,” meaning they cannot adequately allege “damages are capable of measurement on a class[-]wide basis” as required by Comcast. Id. at 24 (quoting Comcast, 569 U.S. at 34). Concerning plaintiffs’ request for leave to file an expert report, the government broadly asserts plaintiffs were not diligent in pursuing their discovery requests. See, e.g., Def.’s Disc. Resp. at 28. As determined supra Section IV.A.1, however, plaintiffs were diligent with respect to pursuing the requested discovery generally. Plaintiffs requested the relevant data in February and July 2020 and planned to replace “the extrapolation” present in their earlier expert reports with analysis “using actual data” upon completion of this requested discovery. See supra Section IV.A.1; Tr. at 136:15–23. The Court’s November 2022 Summary Judgement Order narrowed the scope of this case and further highlighted the need for this additional discovery related to the remaining issues and potential class members. See supra Section IV.A.1, B; Ingham Reg’l Med. Ctr., 163 Fed. Cl. at 427. Further, to the extent the government alleges plaintiffs requested expert report is prejudicial, the government will have sufficient time and opportunity to rebut any supplemental expert report filed by plaintiffs. See supra Section IV.A.2, B.2; Alta Wind I Owner Lessor C, 154 Fed. Cl. at 217 (“Other Court of Federal Claims judges have noted that providing the government an opportunity to file a rebuttal mitigates any prejudice that may have otherwise existed in providing plaintiff the opportunity to reopen the record.”). The contemplated expert report, which will perform the DPP analysis for outpatient radiology claims data within the scope of the Court’s November 2022 liability findings for each putative class member hospital using “only the [government’s] data” as required by the Court’s Summary Judgment Order, could also aid the Court at the merits stage in determining “the amount[] that each hospital is owed.” Tr. at 78:7–15. The requested report therefore would likely not be prejudicial to the government to such an extent as to “warrant the severe sanction of exclusion of [useful] data.” Davita HealthCare Partners, Inc., 125 Fed. Cl. at 399; High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A CHARLES A. WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). Plaintiffs acknowledge, however, the updated calculations they plan to include in their requested expert report are not necessary until “after [class] certification”—at the merits stage. Tr. at 73:15–16. At oral argument, plaintiffs clearly stated they do not “need an expert report for class certification,” which is the next step in this litigation. Tr. at 96:3–4. To the extent the government argues plaintiffs’ certification motion will necessarily fail because plaintiffs lack evidence “damages are [measurable] . . . on a class[-]wide basis” in response to this statement by plaintiffs, Def.’s Disc. Rep. at 23 (quoting Comcast, 569 U.S. 
at 34), plaintiffs respond the DPP Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 20 of 23 - 21 - is the requisite means of “calculat[ing] damages for every single class member,” Tr. at 136:4–5. While the Court reserves judgment as to plaintiffs’ class certification motion, plaintiffs’ argument the DPP provides their model for calculating damages on a class-wide basis because it is a uniform model applicable to all putative class members is sufficient to suggest plaintiffs need not fully calculate alleged damages in a supplemental expert report at this time. Tr. at 54:9–15 (“[PLAINTIFFS:] I think the type of cases that [the government] is talking about [like Comcast] where there’s been [a failure by the plaintiffs to actually address the calculation of class-wide damages, are inapposite because] we haven’t offered a model that is deviating from the contract. What we’re saying . . . the experts are going to . . . essentially crunch[ the] numbers [using the DPP].”). Plaintiffs can do so if and when the merits of this case are argued at trial. This is not a case like Comcast, in which the plaintiffs presented to the court “a methodology that identifies damages that are not the result of the wrong” at issue. Comcast, 569 U.S. at 37. Here, in contrast, the parties indicated at oral argument plaintiffs’ proffered DPP methodology from the parties’ DPP Contracts appears capable of calculating damages for all potential class members. Tr. at 136:1–5 (“[PLAINTIFFS:] But what I will tell you that we’re going to do with the data is we are going to have the auditor [i.e., the expert] plug [the government’s] data into the DPP. That is the model. That is [what] the contract . . . dictates . . . how you calculate damages for every single class member.”); Tr. at 54:12–13 (“[PLAINTIFFS:] [W]e haven’t offered a model that is deviating from the contract.”); Tr. at 93:2–5 (“THE COURT: But the model is just what you said is—if I understood correctly, is that the report is just DPP data discrepancy output. [THE GOVERNMENT]: For each individual [p]laintiff.”); see Tr. 93:2–95:25. The Court accordingly denies plaintiffs’ request for an expert report without prejudice in the interest of the efficient disposition of plaintiffs’ class certification motion. High Point Design LLC, 730 F.3d at 1319 (quoting Kassner, 496 F.3d at 244); 6A CHARLES A. WRIGHT ET AL., FEDERAL PRACTICE AND PROCEDURE § 1522.2 (3d ed. 1998) (“What constitutes good cause sufficient to justify the modification of a scheduling order necessarily varies with the circumstances of each case.”). To the extent plaintiffs “would want in [the] merits” stage an expert report from an “auditor to make sure” the parties “all agree on” damages calculated via the DPP, plaintiffs may refile this motion at that time. Tr. at 96:12–13 (plaintiffs). V. Scope of Granted Discovery and Next Steps As discussed supra Section IV: 1. The Court grants plaintiffs’ deposition request. 2. The Court grants plaintiffs’ document requests as follows: Plaintiffs are permitted to serve amended document discovery requests for all putative class member hospitals tailored to seek only those documents required for plaintiffs to identify “breach[es] of TMA’s [contractual] duty” under the DPP Contract akin to either: (1) the government’s failure to extract “thirteen line items for Integris Baptist and Integris Bass Baptist”; or (2) the government’s failure to adjust “five . . . line items” for Integris Baptist “during the DPP because of an alternate zip code.” See Ingham Reg’l Med. 
Ctr. v. United States, 163 Fed. Cl. 384, 409, 412 (2022). This specification ensures plaintiffs’ requests remain within the scope of the Court’s findings of liability in November 2022. Id. The Court notes at the 19 December 2023 status conference the government agreed it is possible to Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 Page 21 of 23 - 22 - execute the same analysis performed on the named plaintiffs’ data in the 25 October 2023 JSR on the government’s data for all putative class members. 11 3. The Court denies plaintiffs’ request to file a supplemental expert report without prejudice. Plaintiffs may move to file an updated expert report later in this litigation as necessary, at which time the government will be permitted to file a response report. Within three weeks of the date this Order is issued, the parties shall file a JSR comprised of the following: 1. Plaintiffs’ discovery requests revised in accordance with the above clarifications; 2. The parties’ proposed schedule for discovery, including a timeline for plaintiffs’ deposition and the exchange of documents between the parties; and 3. The parties’ proposed schedule for re-briefing class certification after all discovery closes, including a proposed timeline for the filing of new expert reports. As noted by the Court at the 19 December status conference, plaintiffs’ next step should be to analyze the government’s data for the six named plaintiffs already in plaintiffs’ possession to assist plaintiffs in tailoring their document requests as discussed above. Further, at the 19 December 2023 status conference, the parties agreed the partial grant of plaintiffs’ Discovery Motion moots plaintiffs’ pending Motion to Certify Class Action and Appoint Class Counsel, ECF No. 146, as the parties will need to re-brief these issues following the narrowing of this case on summary judgment and the upcoming additional discovery. The government agreed its pending Motion to Exclude Inadmissible Evidence Relied Upon in Plaintiffs’ Motion for Class Certification, ECF No. 204, is accordingly moot. The government may refile a similar motion if needed during future class certification briefing. Plaintiffs likewise agreed to withdraw without prejudice their Motion to Exclude the Expert Opinions and Continued Participation of Kennell, ECF No. 251, pending further discovery and briefing. Further, plaintiffs agreed, given the scope of this case after summary judgment, the expert report of Fay is moot. Accordingly, the government’s Motion to Exclude the Expert Opinions of Fay, ECF No. 206, is moot. Finally, plaintiffs stated they plan to file a new expert report replacing that of Jerzak later in this litigation. The government noted at the 19 December status conference plaintiffs’ replacement of Ms. Jerzak’s current report will render the government’s Motion to Exclude the Expert Opinions of Jerzak, ECF No. 205, moot as well. 11 As discussed supra note 4, in the 25 October 2023 JSR, the government explained why twelve of the thirteen line items improperly excluded for Integris Baptist and Integris Bass Baptist were not extracted. At the 19 December 2023 status conference, the government indicated it can now search its database for line items improperly excluded due to this same error for all hospitals that participated in the DPP. The government noted, however, it is not aware of what caused the thirteenth line item to be missed so cannot create search criteria appropriate to identifying other similar misses. 
Finally, to identify missed alternate zip codes, the government stated it would need zip code information from plaintiffs and the putative class members. VI. Conclusion For the foregoing reasons, and as specified supra Section V, the Court GRANTS-IN-PART and DENIES-IN-PART plaintiffs’ Motion for Leave to Conduct Certain Limited Additional Discovery and to Submit Supplemental Expert Report, ECF No. 269, and FINDS as MOOT plaintiffs’ Motion for Clarification or, in the Alternative, to Compel Production, ECF No. 161.12 As noted supra Section V, the Court FINDS as MOOT plaintiffs’ Motion to Certify Class Action and Appoint Class Counsel, ECF No. 146, the government’s Motion to Exclude Inadmissible Evidence Relied Upon in Plaintiffs’ Motion for Class Certification, ECF No. 204, the government’s Motion to Exclude the Expert Opinions of Fay, ECF No. 206, and the government’s Motion to Exclude the Expert Opinions of Jerzak, ECF No. 205. As agreed to at the 19 December 2023 status conference, plaintiffs SHALL WITHDRAW their Motion to Exclude the Expert Opinions and Continued Participation of Kennell, ECF No. 251, without prejudice. Finally, as noted at oral argument, see Tr. at 139:10–140:8, the Court STRIKES the government’s Notice of Additional Authority, ECF No. 273, as deficient and GRANTS the government’s Unopposed Motion for Leave to File Notice of Supplemental Authority, ECF No. 274, for good cause shown. The parties SHALL FILE the joint status report discussed supra Section V on or before 23 January 2024. IT IS SO ORDERED. s/ Holte HOLTE Judge 12 At oral argument, the parties agreed the Court ruling on plaintiffs’ current Discovery Motion is also a “ruling on [plaintiffs’ previous Motion to Compel,] ECF [No.] 161.” Tr. at 139:2–9. Case 1:13-cv-00821-RTH Document 286 Filed 01/02/24 |
Only use the provided context to create your answers. Be concise and limit your response to 250 words or less. | What are the areas of competency required for digital transformation? Give an example of each, if available. | What does it take?
A digital transformation requires five areas of competency:
Adopting new technology
A defining feature of a digital transformation is that it involves the adoption of new software-based digital
capabilities or technology infrastructure. It’s not necessary that these use emerging technologies such as
Machine Learning, Augmented Reality, or Blockchain, so long as they support innovation, the creation of new
value to customers, or operational efficiencies.
Using data
With digital transformation comes an explosion in the data available about your products and target markets. That
means that collecting, analyzing, and using data to make good decisions is key. Being good at it can give you a
competitive advantage. Being bad is just the opposite. In an increasingly data-led competitive environment, new
strategies must be based on objective information where possible. As put in a 2017 article by The Economist [2],
“The world’s most valuable resource is no longer oil, but data.”
Customer focus
Understanding your customers is critical for any business and even more so when trying new ways of working
in new markets or business areas. These insights help you understand what can potentially increase customer
satisfaction and, ultimately, sales. As a business, it means having the capability to get deep customer insights,
experiment with ideas, and adapt and evolve your offering or business model in response. The mantra ‘constant
change’ applies to your markets, competitors and customers, and your business. Continually learning what your
customers care about (i.e., value) and where the market is heading must be embedded within your organization.
Cross-functional collaboration and processes
Today’s digital products are highly complex and involve many specialists to bring them to market, from skilled
software developers and designers to User eXperience (UX) experts, DevOps, and Product Managers. No one
individual has all the insights or skills necessary for success; a team of specialists working closely together is
needed. In many companies, you see organizational silos, which can make cross-functional teamwork hard. Empire
building and old-fashioned power hierarchies (founded on the human tendency for large egos to dominate) are
often the reason. In these silos, top-down decision making, known as command-and-control, is often the result.
However, this can exclude vital insights from other functions.
Building software products is a creative process, and diverse skills and viewpoints in the team drive better results.
Organization structures and processes that promote cross-functional working can bring out the best from all the
different specialisms and maximize the odds of success. Having everyone’s primary home as an autonomous team
(or squad) focused on a specific product/element is often held up as the ideal example of how to do this. However,
more often we see a product management department with Product Managers leading virtual teams who are
working on their product(s).
Capacity to change and adopt a digital mindset
Digital transformation requires fundamental changes within an organization, which go to the heart of its business
model, culture, and operations. Companies must be willing to take a risk by experimenting with new business
models and new ways of doing things. For example, incentivizing Sales to sell software rather than hardware
or changing the business case approach to see software products as an ongoing development rather than a
one-off project.
A good example is the car industry in Europe. Over the last couple of years, it has been trying to adopt this digital
mindset and move from the paradigm of a hardware-centric product (a one-off sale of a car) to a more software-centric approach. With continuous updates to the car’s software, this opens up the potential for new additional
and ongoing revenue. This is possible only if the leadership and people in a company adopt a digital mindset.
It’s about thinking about digital as the default approach rather than a series of one-off transformation projects. | What are the areas of competency required for digital transformation? Give an example of each, if available. Only use the provided context to create your answers. Be concise and limit your response to 250 words or less.
What does it take?
A digital transformation requires five areas of competency:
Adopting new technology
A definitive feature of a digital transformation is that it involves the adoption of new software-based digital
capabilities or technology infrastructure. It’s not necessary that these use emergent technologies such as
Machine Learning, Augmented Reality, or Blockchain, so long as they support innovation, the creation of new
value to customers, or operational efficiencies.
Using data
With digital transformation comes an explosion in the data available about your products and target markets. That
means that collecting, analyzing, and using data to make good decisions is key. Being good at it can give you a
competitive advantage. Being bad is just the opposite. In an increasingly data-led competitive environment, new
strategies must be based on objective information where possible. As put in a 2017 article by The Economist [2],
“The world’s most valuable resource is no longer oil, but data.”
Customer focus
Understanding your customers is critical for any business and even more so when trying new ways of working
in new markets or business areas. These insights help you understand what can potentially increase customer
satisfaction and, ultimately, sales. As a business, it means having the capability to get deep customer insights,
experiment with ideas, and adapt and evolve your offering or business model in response. The mantra ‘constant
change’ applies to your markets, competitors and customers, and your business. Continually learning what your
customers care about (i.e., value) and where the market is heading must be embedded within your organization.
Cross-functional collaboration and processes
Today’s digital products are highly complex and involve many specialists to bring them to market, from skilled
software developers and designers to User eXperience (UX) experts, DevOps, and Product Managers. No one
individual has all the insights or skills necessary for success - a team of specialists working closely together is
needed. In many companies, you see organizational silos, which can make cross-functional teamwork hard. Empire
building and old-fashioned power hierarchies (founded on the human tendency for large egos to dominate) are
often the reason. In these silos, top-down decision making, known as command-and-control, is often the result.
However, this can exclude vital insights from other functions.
Building software products is a creative process, and diverse skills and viewpoints in the team drive better results.
Organization structures and processes that promote cross-functional working can bring out the best from all the
different specialisms and maximize the odds of success. Having everyone’s primary home as an autonomous team
(or squad) focused on a specific product/element is often held up as the ideal example of how to do this. However,
more often we see a product management department with Product Managers leading virtual teams who are
working on their product(s).
Capacity to change and adopt a digital mindset
Digital transformation requires fundamental changes within an organization, which go to the heart of its business
model, culture, and operations. Companies must be willing to take a risk by experimenting with new business
models and new ways of doing things. For example, incentivizing Sales to sell software rather than hardware
or changing the business case approach to see software products as an ongoing development rather than a
one-off project.
A good example is the car industry in Europe. Over the last couple of years, it has been trying to adopt this digital
mindset and move from the paradigm of a hardware-centric product (a one-off sale of a car) to a more software-centric approach. With continuous updates to the car’s software, this opens up the potential for new additional
and ongoing revenue. This is possible only if the leadership and people in a company adopt a digital mindset.
It’s about thinking about digital as the default approach rather than a series of one-off transformation projects. |
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | Summarize the text in 5 sentences with 20 words. Then provide a two-sentence example of an important quantitative finding. Finally, list the 5 limitations of the study. | Discussion
Our most important finding is the inverse correlation between the presence of a chronic disease and the likelihood of treatment of an unrelated disorder. In no case did the presence of the chronic disease justify withholding an effective medical treatment. The results are compatible with the theory that one disease provides protection against other diseases, but this theory is unlikely to be correct, given medical pathophysiology and shared underlying predisposing factors.19–22 Instead, our findings suggest a shortfall in health care — specifically, that unrelated disorders are relatively neglected in patients with chronic medical diseases.
Our work has several limitations, of which three merit emphasis. First, the study was not a randomized trial: it is not possible to assign patients randomly to have or not to have a chronic disease. Subtle confounding could contribute to and possibly justify the observed differences. However, imbalances related to age, sex, insurance status or carrier, ability to pay, or random chance would not explain the findings. Second, optimal rates of secondary treatments are controversial. In theory, our findings could be explained by postulating the overtreatment of patients who do not have chronic diseases. If true, this postulate could represent a potentially more common failure in medical decision making. Finally, the mechanism underlying the results remains a topic for future research — in particular, the question of whether the second disease is not detected in the presence of the first or whether it is detected but not treated.
The observed results might arise from several sources. Patients with chronic diseases may be exhausted and reluctant to accept multiple interventions. Clinicians are often busy and may strive to keep care simple, particularly if they do not have relatively more time for the patients with relatively more complicated conditions. A chronic disease — particularly chronic psychosis — may also limit communication between patient and doctor. Universal insurance coverage could also contribute if the implicit goal of equity is achieved by doing something for all but a lot for none. The results, however, cannot be attributed either to a tendency toward prescribing multiple medications for the elderly or to barriers in access to medical care, both of which work against finding any negative associations.23–25 Similarly, the results cannot be attributed to fraud in which more than one person uses the same health insurance card.26,27
Unrelated treatments are not always indicated for patients who have chronic diseases. Chronic diseases are sometimes associated with reduced life expectancy, making long-term preventive therapy unrewarding. Adding supplementary medications often increases the risk of unwanted drug interactions and the potential for an adverse event. Prescribing additional medications for an unrelated disorder might also alter a patient's compliance with essential medications and indirectly cause harm. Time constraints, communication problems, the patient's preferences, and the priorities of the specialist involved sometimes make it difficult to address more than one problem effectively in any one patient. Finally, it is often sensible to postpone minor treatments until major problems are resolved.
The unrelated treatments we chose had important implications for each selected chronic disease. Patients with diabetes mellitus are at increased risk for atherosclerosis and may be particularly likely to benefit from estrogen-replacement therapy.28,29 The reserve capacity of patients with pulmonary emphysema is seriously compromised, and they may be unable to tolerate even a small cardiovascular event.30,31 Patients with psychotic syndromes are often sensitive to discomfort and theoretically might have further worsening of their mental status as a result of joint pain.32,33 In all three examples, inadvertent undertreatment may have consequences. Furthermore, these examples are similar to other reported cases of mistakes in the care of patients who have more than one illness.34–36 | "================
<TEXT PASSAGE>
=======
Discussion
Our most important finding is the inverse correlation between the presence of a chronic disease and the likelihood of treatment of an unrelated disorder. In no case did the presence of the chronic disease justify withholding an effective medical treatment. The results are compatible with the theory that one disease provides protection against other diseases, but this theory is unlikely to be correct, given medical pathophysiology and shared underlying predisposing factors.19–22 Instead, our findings suggest a shortfall in health care — specifically, that unrelated disorders are relatively neglected in patients with chronic medical diseases.
Our work has several limitations, of which three merit emphasis. First, the study was not a randomized trial: it is not possible to assign patients randomly to have or not to have a chronic disease. Subtle confounding could contribute to and possibly justify the observed differences. However, imbalances related to age, sex, insurance status or carrier, ability to pay, or random chance would not explain the findings. Second, optimal rates of secondary treatments are controversial. In theory, our findings could be explained by postulating the overtreatment of patients who do not have chronic diseases. If true, this postulate could represent a potentially more common failure in medical decision making. Finally, the mechanism underlying the results remains a topic for future research — in particular, the question of whether the second disease is not detected in the presence of the first or whether it is detected but not treated.
The observed results might arise from several sources. Patients with chronic diseases may be exhausted and reluctant to accept multiple interventions. Clinicians are often busy and may strive to keep care simple, particularly if they do not have relatively more time for the patients with relatively more complicated conditions. A chronic disease — particularly chronic psychosis — may also limit communication between patient and doctor. Universal insurance coverage could also contribute if the implicit goal of equity is achieved by doing something for all but a lot for none. The results, however, cannot be attributed either to a tendency toward prescribing multiple medications for the elderly or to barriers in access to medical care, both of which work against finding any negative associations.23–25 Similarly, the results cannot be attributed to fraud in which more than one person uses the same health insurance card.26,27
Unrelated treatments are not always indicated for patients who have chronic diseases. Chronic diseases are sometimes associated with reduced life expectancy, making long-term preventive therapy unrewarding. Adding supplementary medications often increases the risk of unwanted drug interactions and the potential for an adverse event. Prescribing additional medications for an unrelated disorder might also alter a patient's compliance with essential medications and indirectly cause harm. Time constraints, communication problems, the patient's preferences, and the priorities of the specialist involved sometimes make it difficult to address more than one problem effectively in any one patient. Finally, it is often sensible to postpone minor treatments until major problems are resolved.
The unrelated treatments we chose had important implications for each selected chronic disease. Patients with diabetes mellitus are at increased risk for atherosclerosis and may be particularly likely to benefit from estrogen-replacement therapy.28,29 The reserve capacity of patients with pulmonary emphysema is seriously compromised, and they may be unable to tolerate even a small cardiovascular event.30,31 Patients with psychotic syndromes are often sensitive to discomfort and theoretically might have further worsening of their mental status as a result of joint pain.32,33 In all three examples, inadvertent undertreatment may have consequences. Furthermore, these examples are similar to other reported cases of mistakes in the care of patients who have more than one illness.34–36
https://www.nejm.org/doi/full/10.1056/NEJM199805213382106
================
<QUESTION>
=======
Summarize the text in 5 sentences with 20 words. Then provide a two-sentence example of an important quantitative finding. Finally, list the 5 limitations of the study.
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." |
You are given a reference document. You must only use information found in the reference document to answer the question asked. | In what ways do companies within the sharing economy (Uber, Airbnb) evade governmental regulations but remain in business? | Case Studies in Ethics: Teaching Caselettes dukeethics.org
This work is licensed under the Creative Commons Attribution - Noncommercial - No
Derivative Works 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. You may reproduce this work for non-commercial use if you use
the entire document and attribute the source: The Kenan Institute for Ethics at Duke University.
The term “Sharing Economy” refers to individuals directly interacting with each
other online to exchange goods and services, which is also known as collaborative
consumption. Individuals connect to each other through websites or phone applications, like Airbnb or Uber, which create the market space for peer-to-peer interactions. Through these sites and apps, people rent out their extra rooms on Airbnb,
or rent the empty backseats of their car for travellers on Uber. However, Uber and
Airbnb are beginning to face regulation concerns, which introduces the questions
of what, why and how to regulate these companies. This case study will use the
ethical frameworks of utilitarianism and Rawlsianism to address the regulatory
issues of collaborative consumption, specifically the companies Airbnb and Uber.
This case study was completed under the direction of Dr. Amber Díaz Pearson, The Kenan
Institute for Ethics.
THE ETHICS OF REGULATING
THE SHARING ECONOMY
Teaching Caselette
Alexandra Zrenner
Background
Collaborative consumption has many economic benefits: the decline of transaction costs, increased efficiency
and thus increased profits. The Internet, namely websites and phone applications, minimizes the transaction costs
– the cost to the producer and consumer to conduct business – by directly connecting suppliers and consumers.
Collaborative consumption companies also enable individuals to sell the unused potential of an owned good for
another’s temporary use, which further increases efficiency. Owners profit from the unused potential, and consumers
save from renting rather than owning. The result of the growing collaborative economy and its efficiency is a peer-to-peer rental market worth $26 billion.1
Airbnb and Uber are two popular examples of collaborative consumption companies. Airbnb is a website on which
Hosts offer their homes, or rooms in their homes, to Guests for visits. Uber is an application for smartphones that
connects Riders to pay Drivers for a ride. The companies’ Terms of Service, which users agree to upon using Airbnb
or Uber, define the companies as the platforms that facilitate the transactions between all users and the company.
Given these Terms of Service, Airbnb and Uber are not held to full legal responsibility for the actions of the site or
application users.
Regulations
The practices of Airbnb and Uber create regulatory concerns related to market competition, consumer protections,
and the legality of the companies’ practices. Regulators must balance protecting established industries and assisting
developing industries; decide to what standards of consumer safety the companies should be held; and determine the
legality of the new companies’ practices.
Competition
The expectation in a capitalist and competitive economy is that innovation encourages competition and vice versa,
which further increases efficiency. Airbnb and Uber use innovative online technologies to offer consumers new ways
to find a place to stay or a ride. For example, consumers can choose to pay an Airbnb Host or a range of hotels for
similar services, so Airbnb and hotels must compete for the consumer. Airbnb can offer lower prices to a consumer
due to the company’s use of innovative online technology that minimizes production and transaction costs. The hotel
industry has higher costs that must cover workers’ wages and property maintenance.
Stephen Dubner, writer of Freakonomics: The Hidden Side of Everything and host of the podcast by the same name,
discussed how the innovations of Airbnb and Uber might fit the “creative destruction” model in economics.2
The term “creative destruction” was first presented in 1942 by Austrian economist Joseph Schumpeter to describe how
new innovations or companies compete with the established technologies or companies, and the success of the new
means the disappearance of the established.3 Within this model, the prediction is that more consumers would choose
Airbnb and Uber, increasing Airbnb’s and Uber’s profits. Simultaneously, the profits of the hotel and taxi companies
would fall until every hotel and taxi company leaves the industry, leading to the disappearance of those industries.
1 “The Rise of the Sharing Economy.” The Economist. The Economist Newspaper, 09 Mar. 2013. Web. 27 Mar. 2015.
2 Dubner, Stephen. “Re: Regulate This!” Audio blog comment. Freakonomics: The Hidden Side of Everything. Freakonomics, LLC, 4 Sept.
2014. Web. 1 Apr. 2015.
3 W. Michael Cox and Richard Alm, “Creative Destruction.” The Concise Encyclopedia of Economics. 2008. Library of Economics and Liberty.
27 March 2015. <http://www.econlib.org/library/Enc/CreativeDestruction.html>.
Airbnb, Uber, hotels and taxi companies, and regulators are aware of the creative destruction model’s prediction.
In this model, Airbnb and Uber would be “winners”: the competition between Airbnb and Uber and hotels and taxi
companies should increase the overall welfare of those within the room- and ride-providing markets.
On the other hand, local, state and federal policymakers aiming to protect careers in the hotel and taxi industries
argue that the disappearance of these industries would cause greater harm to society than the improvements that
would result from the newer companies.
Consumer Protections
Many established regulations for the hotel and taxi industry exist for consumer protection: safety standards, antidiscrimination laws, etc. However, Airbnb and Uber are not a hotel or taxi service; they are platforms that are not
directly legally responsible for the same standards a hotel or taxi may be responsible to. Airbnb and Uber argue
that they developed methods to be regulated by themselves and their users. Competitive economic theory holds that
producer- or consumer-based methods of regulation will provide the most efficient outcome, and that government
regulations are comparatively inefficient. Producer-based regulation is called “delegated regulation”: the local
government sets standards and allows Airbnb and Uber to determine whether or not they met these standards
themselves. The consumer-based method of regulation is reviews: users write reviews of Airbnb Hosts and Uber
Drivers, incentivizing positive Host and Driver behavior, and regulating the quality and standards of the Hosts and
Drivers.
Regulators concerned with consumer safety do not consider delegated regulation and consumer reviews adequate
substitutes for government standards of consumer protection. Consumer reviews may not address fire standards of
the apartment or emissions standards of a car. In addition, producer-based delegated regulation lacks accountability
measures to ensure the companies protect the consumers. While the government can set the standards, the companies
must hold themselves accountable, which worries regulators.
Consumer-safety regulators and sometimes even Uber customers criticize Uber’s practice of surge pricing as a
violation of consumer protections. Uber uses an algorithm to surge prices – an increase of prices resulting from an
increase in demand (as economic theory would predict). The increase in price should signal more drivers to offer
rides. This in turn should increase consumers’ welfare since they have more access to the service, and the drivers’
welfare should increase from receiving higher profits.
Although Uber references this economic theory to explain the use of surge pricing, some critics have questioned the
wisdom of allowing an algorithm in all situations and scenarios. For example, Uber’s algorithm surged prices during
the Sydney, Australia shooting and during natural disasters like Northeastern winter storms. Consumers do not know
the algorithm, and question if Uber is abusing the consumer’s safety during natural disasters or perilous situations.
Taxes & Legality of Practice
Airbnb and Uber have been criticized for their Hosts and Drivers not complying with city, state or federal law. In
cities where Airbnb and Uber operate, the legality of renting out your empty rooms or backseats varies and may be
ambiguous depending on the location.4
New York and California have existing laws on zoning, home rentals, and
taxi regulations that address the legality of ride-sharing or charging guests for temporary home or room rentals.
In many cities, temporary home or room rentals or charging individuals for rides require city permission. However,
4 Streitfeld, David. “Airbnb Listings Mostly Illegal, New York State Contends.” The New York Times. The New York Times, 15 Oct. 2014.
Web. 27 Mar. 2015.
the average Host or Driver does not seek out city operating permits because the Hosts and Drivers don’t know how
to obtain the permits, or don’t think they would need a permit for their primary residence or car.
Governments, mainly at the local and state levels, are concerned about the questionable legality of Airbnb and Uber
transactions, and the lack of taxes collected from these transactions. To address these concerns, Airbnb is reaching
out to cities and states to help legislators draft or adjust legislation. In addition, on the “Frequently Asked Questions”
page, Airbnb informs and requires Hosts to be aware of and comply with local laws and their landlord’s rental
policies, both of which may prohibit short-term rentals. Airbnb has worked with San Francisco, Portland, New York
and the District of Columbia to address the concerns of the city and to help legislatures draft new laws to ensure that
Airbnb users do not violate city laws. For example, Airbnb recently started to collect hotel taxes from Washington,
D.C. Hosts and to send the collected funds directly to the city.5
In doing so, Airbnb protects the information of its
Hosts so that the city could not punish the Hosts for the ambiguous legality of their actions, and ensures the city
does not financially suffer from loss of tax revenue. Airbnb’s active efforts to work with governments to address
regulatory concerns are an example of self-regulation happening within the market. As such, some supporters of
Airbnb argue that the company does not need any additional outside governmental regulations.
Uber, in contrast, has not reached out to address legislative or tax concerns, and its main concern is regulatory.
Uber’s interactions with governments have resulted in government delegating regulatory responsibilities to Uber
rather than the typical arrangement of the government regulating the company.6
Some advocates argue that this
delegated regulation is efficient, so any further government regulation would be inefficient and unnecessary.
Nonetheless, regulators may be concerned that these Airbnb and Uber-created regulations are still insufficient to
meet government standards.
5 Badger, Emily. “Airbnb Is about to Start Collecting Hotel Taxes in More Major Cities, including Washington.” Washington Post. The Washington Post, 29 Jan. 2015. Web. 27 Mar. 2015.
6 Uber argues that its methods regulating Drivers with background checks are sufficient if not better than government background checks. Uber
is pushing and lobbying against regulations that are similar to taxi regulations that may hinder Uber growth. Regulators, however, question Uber’s
concern of legality and consumer safety.
Isaac, Mike. “Uber’s System for Screening Drivers Draws Scrutiny.” The New York Times. The New York Times, 09 Dec. 2014. Web. 07 Apr.
2015. | You are given a reference document. You must only use information found in the reference document to answer the question asked.
In what ways do companies within the sharing economy (Uber, Airbnb) evade governmental regulations but remain in business?
Case Studies in Ethics: Teaching Caselettes dukeethics.org
This work is licensed under the Creative Commons Attribution - Noncommercial - No
Derivative Works 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. You may reproduce this work for non-commercial use if you use
the entire document and attribute the source: The Kenan Institute for Ethics at Duke University.
The term “Sharing Economy” refers to individuals directly interacting with each
other online to exchange goods and services, which is also known as collaborative
consumption. Individuals connect to each other through websites or phone applications, like Airbnb or Uber, which create the market space for peer-to-peer interactions. Through these sites and apps, people rent out their extra rooms on Airbnb,
or rent the empty backseats of their car for travellers on Uber. However, Uber and
Airbnb are beginning to face regulation concerns, which introduces the questions
of what, why and how to regulate these companies. This case study will use the
ethical frameworks of utilitarianism and Rawlsianism to address the regulatory
issues of collaborative consumption, specifically the companies Airbnb and Uber.
This case study was completed under the direction of Dr. Amber Díaz Pearson, The Kenan
Institute for Ethics.
THE ETHICS OF REGULATING
THE SHARING ECONOMY
Teaching Caselette
Alexandra Zrenner
Background
Collaborative consumption has many economic benefits: the decline of transaction costs, increased efficiency
and thus increased profits. The Internet, namely websites and phone applications, minimizes the transaction costs
– the cost to the producer and consumer to conduct business – by directly connecting suppliers and consumers.
Collaborative consumption companies also enable individuals to sell the unused potential of an owned good for
another’s temporary use, which further increases efficiency. Owners profit from the unused potential, and consumers
save from renting rather than owning. The result of the growing collaborative economy and its efficiency is a peer-to-peer rental market worth $26 billion.1
Airbnb and Uber are two popular examples of collaborative consumption companies. Airbnb is a website on which
Hosts offer their homes, or rooms in their homes, to Guests for visits. Uber is an application for smartphones that
connects Riders to pay Drivers for a ride. The companies’ Terms of Service, which users agree to upon using Airbnb
or Uber, define the companies as the platforms that facilitate the transactions between all users and the company.
Given these Terms of Service, Airbnb and Uber are not held to full legal responsibility for the actions of the site or
application users.
Regulations
The practices of Airbnb and Uber create regulatory concerns related to market competition, consumer protections,
and the legality of the companies’ practices. Regulators must balance protecting established industries and assisting
developing industries; decide to what standards of consumer safety the companies should be held; and determine the
legality of the new companies’ practices.
Competition
The expectation in a capitalist and competitive economy is that innovation encourages competition and vice versa,
which further increases efficiency. Airbnb and Uber use innovative online technologies to offer consumers new ways
to find a place to stay or a ride. For example, consumers can choose to pay an Airbnb Host or a range of hotels for
similar services, so Airbnb and hotels must compete for the consumer. Airbnb can offer lower prices to a consumer
due to the company’s use of innovative online technology that minimizes production and transaction costs. The hotel
industry has higher costs that must cover workers’ wages and property maintenance.
Stephen Dubner, writer of Freakonomics: The Hidden Side of Everything and host of the podcast by the same name,
discussed how the innovations of Airbnb and Uber might fit the “creative destruction” model in economics.2
The term “creative destruction” was first presented in 1942 by Austrian economist Joseph Schumpeter to describe how
new innovations or companies compete with the established technologies or companies, and the success of the new
means the disappearance of the established.3 Within this model, the prediction is that more consumers would choose
Airbnb and Uber, increasing Airbnb’s and Uber’s profits. Simultaneously, the profits of the hotel and taxi companies
would fall until every hotel and taxi company leaves the industry, leading to the disappearance of those industries.
1 “The Rise of the Sharing Economy.” The Economist. The Economist Newspaper, 09 Mar. 2013. Web. 27 Mar. 2015.
2 Dubner, Stephen. “Re: Regulate This!” Audio blog comment. Freakonomics: The Hidden Side of Everything. Freakonomics, LLC, 4 Sept.
2014. Web. 1 Apr. 2015.
3 W. Michael Cox and Richard Alm, “Creative Destruction.” The Concise Encyclopedia of Economics. 2008. Library of Economics and Liberty.
27 March 2015. <http://www.econlib.org/library/Enc/CreativeDestruction.html>.
Airbnb, Uber, hotels and taxi companies, and regulators are aware of the creative destruction model’s prediction.
In this model, Airbnb and Uber would be “winners”: the competition between Airbnb and Uber and hotels and taxi
companies should increase the overall welfare of those within the room- and ride-providing markets.
On the other hand, local, state and federal policymakers aiming to protect careers in the hotel and taxi industries
argue that the disappearance of these industries would cause greater harm to society than the improvements that
would result from the newer companies.
Consumer Protections
Many established regulations for the hotel and taxi industry exist for consumer protection: safety standards, antidiscrimination laws, etc. However, Airbnb and Uber are not a hotel or taxi service; they are platforms that are not
directly legally responsible for the same standards a hotel or taxi may be responsible to. Airbnb and Uber argue
that they developed methods to be regulated by themselves and their users. Competitive economic theory holds that
producer- or consumer-based methods of regulation will provide the most efficient outcome, and that government
regulations are comparatively inefficient. Producer-based regulation is called “delegated regulation”: the local
government sets standards and allows Airbnb and Uber to determine whether or not they met these standards
themselves. The consumer-based method of regulation is reviews: users write reviews of Airbnb Hosts and Uber
Drivers, incentivizing positive Host and Driver behavior, and regulating the quality and standards of the Hosts and
Drivers.
Regulators concerned with consumer safety do not consider delegated regulation and consumer reviews adequate
substitutes for government standards of consumer protection. Consumer reviews may not address fire standards of
the apartment or emissions standards of a car. In addition, producer-based delegated regulation lacks accountability
measures to ensure the companies protect the consumers. While the government can set the standards, the companies
must hold themselves accountable, which worries regulators.
Consumer-safety regulators and sometimes even Uber customers criticize Uber’s practice of surge pricing as a
violation of consumer protections. Uber uses an algorithm to surge prices – an increase of prices resulting from an
increase in demand (as economic theory would predict). The increase in price should signal more drivers to offer
rides. This in turn should increase consumers’ welfare since they have more access to the service, and the drivers’
welfare should increase from receiving higher profits.
Although Uber references this economic theory to explain the use of surge pricing, some critics have questioned the
wisdom of allowing an algorithm in all situations and scenarios. For example, Uber’s algorithm surged prices during
the Sydney, Australia shooting and during natural disasters like Northeastern winter storms. Consumers do not know
the algorithm, and question if Uber is abusing the consumer’s safety during natural disasters or perilous situations.
Taxes & Legality of Practice
Airbnb and Uber have been criticized for their Hosts and Drivers not complying with city, state or federal law. In
cities where Airbnb and Uber operate, the legality of renting out your empty rooms or backseats varies and may be
ambiguous depending on the location.4
New York and California have existing laws on zoning, home rentals, and
taxi regulations that address the legality of ride-sharing or charging guests for temporary home or room rentals.
In many cities, temporary home or room rentals or charging individuals for rides require city permission. However,
4 Streitfeld, David. “Airbnb Listings Mostly Illegal, New York State Contends.” The New York Times. The New York Times, 15 Oct. 2014.
Web. 27 Mar. 2015.
the average Host or Driver does not seek out city operating permits because the Hosts and Drivers don’t know how
to obtain the permits, or don’t think they would need a permit for their primary residence or car.
Governments, mainly at the local and state levels, are concerned about the questionable legality of Airbnb and Uber
transactions, and the lack of taxes collected from these transactions. To address these concerns, Airbnb is reaching
out to cities and states to help legislators draft or adjust legislation. In addition, on the “Frequently Asked Questions”
page, Airbnb informs and requires Hosts to be aware of and comply with local laws and their landlord’s rental
policies, both of which may prohibit short-term rentals. Airbnb has worked with San Francisco, Portland, New York
and the District of Columbia to address the concerns of the city and to help legislatures draft new laws to ensure that
Airbnb users do not violate city laws. For example, Airbnb recently started to collect hotel taxes from Washington,
D.C. Hosts and to send the collected funds directly to the city.5
In doing so, Airbnb protects the information of its
Hosts so that the city could not punish the Hosts for the ambiguous legality of their actions, and ensures the city
does not financially suffer from loss of tax revenue. Airbnb’s active efforts to work with governments to address
regulatory concerns are an example of self-regulation happening within the market. As such, some supporters of
Airbnb argue that the company does not need any additional outside governmental regulations.
Uber, in contrast, has not reached out to address legislative or tax concerns, and its main concern is regulatory.
Uber’s interactions with governments have resulted in government delegating regulatory responsibilities to Uber
rather than the typical arrangement of the government regulating the company.6
Some advocates argue that this
delegated regulation is efficient, so any further government regulation would be inefficient and unnecessary.
Nonetheless, regulators may be concerned that these Airbnb and Uber-created regulations are still insufficient to
meet government standards.
5 Badger, Emily. “Airbnb Is about to Start Collecting Hotel Taxes in More Major Cities, including Washington.” Washington Post. The Washington Post, 29 Jan. 2015. Web. 27 Mar. 2015.
6 Uber argues that its methods regulating Drivers with background checks are sufficient if not better than government background checks. Uber
is pushing and lobbying against regulations that are similar to taxi regulations that may hinder Uber growth. Regulators, however, question Uber’s
concern of legality and consumer safety.
Isaac, Mike. “Uber’s System for Screening Drivers Draws Scrutiny.” The New York Times. The New York Times, 09 Dec. 2014. Web. 07 Apr.
2015. |
Using only the information in the provided context give your answer in bullet points. | What were all of the findings on the non-human test subjects? | Because the brain is particularly vulnerable to oxidative stress, and anxiety disorders are characterized by a decrease in protective antioxidants as well as an increase in oxidative damage, treatments that protect against oxidative stress are desirable (54, 55). Multiple suggested mechanisms have been suggested to explain how oxidative stress plays a role in brain disorders. Oxidative stress can act as a trigger for these disorders and can also be a result of neuroinflammation, which has also been linked to brain-related disorders (56). Figure 3 shows how the components of ashwagandha can affect cell signaling pathways and the production of inflammatory mediators. Inflammation in peripheral tissues may directly play a role in increasing neuroinflammation and oxidative stress in the brain (56). Although there is evidence linking both oxidative stress and inflammation to brain-related disorders, it has not been conclusively proven which one causes the other or vice versa (57). Studies on animals exploring the effects of WS on anxiety have revealed a strong link between improvements in anxiety-related behavior and the amelioration of oxidative stress and inflammatory indicators.
The root extract and leaf extract of WS, although not clearly described, were able to increase the levels of catalase activity and reduced glutathione (GSH) in the brain, and also lower the levels of lipid peroxidation in a mouse model of sleep deprivation and a zebrafish model of neurotoxicity induced by benzo[a]pyrene (58, 59). Furthermore, WS demonstrated the ability to decrease nitrite levels in the mouse model and also lower protein carbonylation in the zebrafish model (58, 59). In an experiment using rats to simulate an ischemic stroke, a uniform hydroalcoholic extract derived from WS roots effectively decreased the levels of lipid peroxidation and enhanced antioxidant function in the brain (60). ASH-WEX, an aqueous leaf extract, was shown to decrease pro-inflammatory cytokines, specifically TNFα and IL-6, both in the peripheral and central nervous systems, in animal models of neuroinflammation and sleep deprivation (61). ASH-WEX produced a significant reduction in indicators of reactive gliosis, such as GFAP, as well as neuroinflammation, such as NOX2, iNOS, and COX2. Furthermore, it successfully regulated various inflammatory pathways and reduced cellular death in the brain (62). In a rat experiment examining the effects of a high fat diet on obesity, the use of WS dry leaf powder significantly lessened the expression of pro-inflammatory cytokines both in the body and brain, decreased indicators of reactive gliosis and neuroinflammation, modulated the nuclear factor NF-kappa-B (NF-κB) pathway, and lowered cell death (63).
Supplementation with ashwagandha has been shown to lower C-reactive protein (CRP) activity (64), one of the most important indicators of inflammation. Furthermore, other studies have shown that ashwagandha exerts a number of anti-inflammatory effects on chronic inflammation-mediated diseases (65), particularly rheumatoid arthritis (66), inflammatory bowel disorder (67), and systemic lupus erythematosus (68). These studies suggest that ashwagandha may be a useful tool for reducing the cytokine storm. Withanolides, particularly Withaferin A, are responsible for the majority of ashwagandha’s anti-inflammatory effects, according to several reports (69, 70). Ashwagandha may work by interacting with components of the proinflammatory cell signaling pathway, such as NF-κB, signaling kinases, HSP90, Nrf2, and the inflammasome complex, even though the mechanisms behind the anti-inflammatory effect of withanolides are not fully understood (71). Since the NF-κB transcription factor family is implicated in a number of chronic disorders caused by inflammation, individuals with high NF-κB levels may benefit from therapeutic targeting of NF-κB. In this situation, ashwagandha has the ability to inhibit and mediate the activity of the NF-κB pathway (72). It is thought that strong protein kinase inhibitor activity is necessary for ashwagandha to function. Ashwagandha has the ability to inhibit the signaling cascades of protein kinases, which are essential in inflammatory pathways (72). Furthermore, it appears that kinase inhibition takes place when nitric oxide synthesis is inhibited, which also benefits the inflammatory process. Another possible explanation for ashwagandha’s anti-inflammatory properties is the downregulation or destabilization of HSP activity, which is implicated in regulatory kinase pathways. As previously mentioned, ashwagandha regulates Nrf2 to moderate oxidative stress (34, 73). Nrf2 activation may account for ashwagandha’s anti-inflammatory properties, as oxidative stress frequently takes place in sites of inflammation and is thought to be one cause of chronic inflammation (74). Finally, by blocking inflammasomes, cytokines, and other multiprotein pro-inflammatory complexes, ashwagandha may lessen inflammation (7
According to some reports, ashwagandha is an adaptogen that, due to its antioxidant properties, boosts immunity, helps the body respond to stress more effectively, increases resilience, and fights oxidative stress and cellular damage (76, 77). It has been demonstrated that the bioactive C28-steroidal lactones present in WS leaves possess neuroprotective, anxiolytic, antioxidant, and anti-inflammatory properties. Moreover, Withanoside IV and its metabolite sominone, found in WS roots, have been shown to promote synaptogenesis and neuronal outgrowth. Furthermore, it has been demonstrated that the herb inhibits acetylcholinesterase and protects rats from cognitive decline (78). One study evaluated the effects of ashwagandha root extract 300 mg twice a day in humans with moderate cognitive impairment (79). After 8 weeks, the treatment group outperformed the placebo group in tests measuring immediate and general memory, information-processing speed, executive function, and attention. However, the benefits on working memory and visuospatial processing were not definitive because there was little difference between the two groups’ performance on these tasks. Another study examined the effect of ashwagandha extract on cognitive impairment in individuals with bipolar disorder (80). Tests were conducted at baseline and after the intervention, with rats randomly allocated to receive 500 mg/day of ashwagandha or a placebo for 8 weeks. When compared to the placebo, subjects in the treatment group showed significantly better results on the Flanker Test (neutral mean reaction time), the Penn Emotional Acuity Test (mean social cognition response rating), and the Auditory Digit Span (mean digit span backward). These findings suggested that ashwagandha extract may safely enhance cognitive function in bipolar disorder patients, including verbal working memory, response time, and social cognition response. Another study was carried out using ashwagandha root extract on a group of horses. The animals were subjected to a variety of stressors, including loud noises, prolonged physical activity, and separation. Following a 21-day period, the treated group showed a statistically significant reduction in cortisol, glucose, adrenaline, IL-6, lipids, creatinine, aspartate aminotransferase, and alanine aminotransferase (81). | system instruction: [Using only the information in the provided context give your answer in bullet points.]
question: [What were all of the findings on the non-human test subjects?]
context block: [Because the brain is particularly vulnerable to oxidative stress, and anxiety disorders are characterized by a decrease in protective antioxidants as well as an increase in oxidative damage, treatments that protect against oxidative stress are desirable (54, 55). Multiple suggested mechanisms have been suggested to explain how oxidative stress plays a role in brain disorders. Oxidative stress can act as a trigger for these disorders and can also be a result of neuroinflammation, which has also been linked to brain-related disorders (56). Figure 3 shows how the components of ashwagandha can affect cell signaling pathways and the production of inflammatory mediators. Inflammation in peripheral tissues may directly play a role in increasing neuroinflammation and oxidative stress in the brain (56). Although there is evidence linking both oxidative stress and inflammation to brain-related disorders, it has not been conclusively proven which one causes the other or vice versa (57). Studies on animals exploring the effects of WS on anxiety have revealed a strong link between improvements in anxiety-related behavior and the amelioration of oxidative stress and inflammatory indicators.
The root extract and leaf extract of WS, although not clearly described, were able to increase the levels of catalase activity and reduced glutathione (GSH) in the brain, and also lower the levels of lipid peroxidation in a mouse model of sleep deprivation and a zebrafish model of neurotoxicity induced by benzo[a]pyrene (58, 59). Furthermore, WS demonstrated the ability to decrease nitrite levels in the mouse model and also lower protein carbonylation in the zebrafish model (58, 59). In an experiment using rats to simulate an ischemic stroke, a uniform hydroalcoholic extract derived from WS roots effectively decreased the levels of lipid peroxidation and enhanced antioxidant function in the brain (60). ASH-WEX, an aqueous leaf extract, was shown to decrease pro-inflammatory cytokines, specifically TNFα and IL-6, both in the peripheral and central nervous systems, in animal models of neuroinflammation and sleep deprivation (61). ASH-WEX produced a significant reduction in indicators of reactive gliosis, such as GFAP, as well as neuroinflammation, such as NOX2, iNOS, and COX2. Furthermore, it successfully regulated various inflammatory pathways and reduced cellular death in the brain (62). In a rat experiment examining the effects of a high fat diet on obesity, the use of WS dry leaf powder significantly lessened the expression of pro-inflammatory cytokines both in the body and brain, decreased indicators of reactive gliosis and neuroinflammation, modulated the nuclear factor NF-kappa-B (NF-κB) pathway, and lowered cell death (63).
Supplementation with ashwagandha has been shown to lower C-reactive protein (CRP) activity (64), one of the most important indicators of inflammation. Furthermore, other studies have shown that ashwagandha exerts a number of anti-inflammatory effects on chronic inflammation-mediated diseases (65), particularly rheumatoid arthritis (66), inflammatory bowel disorder (67), and systemic lupus erythematosus (68). These studies suggest that ashwagandha may be a useful tool for reducing the cytokine storm. Withanolides, particularly Withaferin A, are responsible for the majority of ashwagandha’s anti-inflammatory effects, according to several reports (69, 70). Ashwagandha may work by interacting with components of the proinflammatory cell signaling pathway, such as NF-κB, signaling kinases, HSP90, Nrf2, and the inflammasome complex, even though the mechanisms behind the anti-inflammatory effect of withanolides are not fully understood (71). Since the NF-κB transcription factor family is implicated in a number of chronic disorders caused by inflammation, individuals with high NF-κB levels may benefit from therapeutic targeting of NF-κB. In this situation, ashwagandha has the ability to inhibit and mediate the activity of the NF-κB pathway (72). It is thought that strong protein kinase inhibitor activity is necessary for ashwagandha to function. Ashwagandha has the ability to inhibit the signaling cascades of protein kinases, which are essential in inflammatory pathways (72). Furthermore, it appears that kinase inhibition takes place when nitric oxide synthesis is inhibited, which also benefits the inflammatory process. Another possible explanation for ashwagandha’s anti-inflammatory properties is the downregulation or destabilization of HSP activity, which is implicated in regulatory kinase pathways. As previously mentioned, ashwagandha regulates Nrf2 to moderate oxidative stress (34, 73). Nrf2 activation may account for ashwagandha’s anti-inflammatory properties, as oxidative stress frequently takes place in sites of inflammation and is thought to be one cause of chronic inflammation (74). Finally, by blocking inflammasomes, cytokines, and other multiprotein pro-inflammatory complexes, ashwagandha may lessen inflammation (7
According to some reports, ashwagandha is an adaptogen that, due to its antioxidant properties, boosts immunity, helps the body respond to stress more effectively, increases resilience, and fights oxidative stress and cellular damage (76, 77). It has been demonstrated that the bioactive C28-steroidal lactones present in WS leaves possess neuroprotective, anxiolytic, antioxidant, and anti-inflammatory properties. Moreover, Withanoside IV and its metabolite sominone, found in WS roots, have been shown to promote synaptogenesis and neuronal outgrowth. Furthermore, it has been demonstrated that the herb inhibits acetylcholinesterase and protects rats from cognitive decline (78). One study evaluated the effects of ashwagandha root extract 300 mg twice a day in humans with moderate cognitive impairment (79). After 8 weeks, the treatment group outperformed the placebo group in tests measuring immediate and general memory, information-processing speed, executive function, and attention. However, the benefits on working memory and visuospatial processing were not definitive because there was little difference between the two groups’ performance on these tasks. Another study examined the effect of ashwagandha extract on cognitive impairment in individuals with bipolar disorder (80). Tests were conducted at baseline and after the intervention, with rats randomly allocated to receive 500 mg/day of ashwagandha or a placebo for 8 weeks. When compared to the placebo, subjects in the treatment group showed significantly better results on the Flanker Test (neutral mean reaction time), the Penn Emotional Acuity Test (mean social cognition response rating), and the Auditory Digit Span (mean digit span backward). These findings suggested that ashwagandha extract may safely enhance cognitive function in bipolar disorder patients, including verbal working memory, response time, and social cognition response. Another study was carried out using ashwagandha root extract on a group of horses. The animals were subjected to a variety of stressors, including loud noises, prolonged physical activity, and separation. Following a 21-day period, the treated group showed a statistically significant reduction in cortisol, glucose, adrenaline, IL-6, lipids, creatinine, aspartate aminotransferase, and alanine aminotransferase (81).] |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | What are the requirements for providing expert evidence? Provide a step by step process of how an expert would present evidence in a court without referencing any specifics such as the type of expert or their specific field of expertise. | Obligations of experts in New Zealand
An “expert” is anyone with specialised knowledge or skill based on training, study or experience – it is not necessary for an expert to have formal qualifications (though, they often do).[1] In New Zealand, expert witnesses are not advocates — they have an overriding duty of impartiality to assist the Court.[2] The duty of impartiality is a fundamental tenet of giving expert evidence in New Zealand, in direct contrast to jurisdictions such as the United States, where experts are specifically selected as part of the advocacy team.
The requirements of expert evidence in New Zealand are set out in the Evidence Act 2006 and the High Court Rules 2016, which sets out the prescribed Code of Conduct (the Code).[3] Expert evidence is only admissible if it is of “substantial help” to the judge,[4] and the expert’s conduct complies with the prescriptive requirements of the Code:[5]
experts must read and agree to comply with the Code;
experts must state their qualifications and confirm that the specific issues they address are properly within their expertise;
experts must state all relevant facts, assumptions, reasoning, literature (or other material), testing or investigations relied on in reaching their opinion; and
experts must appropriately qualify their opinions as necessary.
In common law jurisdictions such as New Zealand, independent experts are engaged by the parties to the dispute. The Court may appoint its own independent expert, but this person must be agreed upon by the parties (if possible), or otherwise selected from experts named by the parties.[6]
Once appointed, experts present evidence to the Court and are then examined by legal counsel, reflecting the adversarial nature of the common law system used in New Zealand. The adversarial court system will be immediately unfamiliar territory for European experts. For example, it is far more common in Germany for experts to be appointed by and primarily examined by the presiding judge.[7] Experts engaged by the parties are considered advocates, and their opinion will be considered as submissions (rather than evidence) by the Court.[8]
In New Zealand, it is not a requirement that experts will disclose the scope of their engagements or the questions that have been put to them in their evidence. Nevertheless, we generally encourage this, as the nature of the questions put to an expert can be used as a measure of the expert’s independence (by checking that the questions put to the witness are impartial and do not lead the witness to a particular answer). This is a mandatory requirement in England, Wales and Australia.
If both parties have engaged experts to address the same issue of fact, the Court may direct that the witnesses confer and attempt to reach agreement on matters within their field of expertise.[9] This narrows the scope of the disputed issues, allowing for more efficient use of Court time. Consistent with their overriding duty to assist the Court, expert witnesses are obligated to comply with these directions.
It is common practice for these obligations to be applied to arbitration proceedings based in New Zealand. In adjudication proceedings, where only a written “witness statement” is required, it is typical for experts to refer to the Code when providing evidence. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
What are the requirements for providing expert evidence? Provide a step by step process of how an expert would present evidence in a court without referencing any specifics such as the type of expert or their specific field of expertise.
<TEXT>
Obligations of experts in New Zealand
An “expert” is anyone with specialised knowledge or skill based on training, study or experience – it is not necessary for an expert to have formal qualifications (though, they often do).[1] In New Zealand, expert witnesses are not advocates — they have an overriding duty of impartiality to assist the Court.[2] The duty of impartiality is a fundamental tenet of giving expert evidence in New Zealand, in direct contrast to jurisdictions such as the United States, where experts are specifically selected as part of the advocacy team.
The requirements of expert evidence in New Zealand are set out in the Evidence Act 2006 and the High Court Rules 2016, which sets out the prescribed Code of Conduct (the Code).[3] Expert evidence is only admissible if it is of “substantial help” to the judge,[4] and the expert’s conduct complies with the prescriptive requirements of the Code:[5]
experts must read and agree to comply with the Code;
experts must state their qualifications and confirm that the specific issues they address are properly within their expertise;
experts must state all relevant facts, assumptions, reasoning, literature (or other material), testing or investigations relied on in reaching their opinion; and
experts must appropriately qualify their opinions as necessary.
In common law jurisdictions such as New Zealand, independent experts are engaged by the parties to the dispute. The Court may appoint its own independent expert, but this person must be agreed upon by the parties (if possible), or otherwise selected from experts named by the parties.[6]
Once appointed, experts present evidence to the Court and are then examined by legal counsel, reflecting the adversarial nature of the common law system used in New Zealand. The adversarial court system will be immediately unfamiliar territory for European experts. For example, it is far more common in Germany for experts to be appointed by and primarily examined by the presiding judge.[7] Experts engaged by the parties are considered advocates, and their opinion will be considered as submissions (rather than evidence) by the Court.[8]
In New Zealand, it is not a requirement that experts will disclose the scope of their engagements or the questions that have been put to them in their evidence. Nevertheless, we generally encourage this, as the nature of the questions put to an expert can be used as a measure of the expert’s independence (by checking if that the questions put to the witness are impartial and do not lead the witness to a particular answer). This is a mandatory requirement in England, Wales and Australia.
If both parties have engaged experts to address the same issue of fact, the Court may direct that the witnesses confer and attempt to reach agreement on matters within their field of expertise.[9] This narrows the scope of the disputed issues, allowing for more efficient use of Court time. Consistent with their overriding duty to assist the Court, expert witnesses are obligated to comply with these directions.
It is common practice for these obligations to be applied to arbitration proceedings based in New Zealand. In adjudication proceedings, where only a written “witness statement” is required, it is typical for experts to refer to the Code when providing evidence.
https://www.minterellison.co.nz/insights/calling-international-experts-an-introduction-to-giving-expert-evidence-in-new-zealand |
You base all answers on the provided context block. Use only the information in the context block to answer the user's question. | How do the two parts of the nest interface? | Abstract
The Nest Thermostat is a smart home automation device that aims to learn a user’s heating
and cooling habits to help optimize scheduling and power usage. With its debut in 2011, Nest
has proven to be such a success that Google spent $3.2B to acquire the company. However,
the complexity of the infrastructure in the Nest Thermostat provides a breeding ground for
security vulnerabilities similar to those found in other computer systems. To mitigate this
issue, Nest signs firmware updates sent to the device, but the hardware infrastructure lacks
proper protection, allowing attackers to install malicious software into the unit. Through a USB
connection, we demonstrate how the firmware verification done by the Nest software stack can
be bypassed, providing the means to completely alter the behavior of the unit. The compromised
Nest Thermostat will then act as a beachhead to attack other nodes within the local network.
Also, any information stored within the unit is now available to the attacker, who no longer has
to have physical access to the device. Finally, we present a solution to smart device architects
and manufacturers aiding the development and deployment of a secure hardware platform.
1 Introduction
The concept of Internet of Things (IoT) and wearable devices has been widely accepted in the last
few years, with an increasing number of smart devices being designed, fabricated, and deployed.
It is estimated that there will be more than 50 billion network connected devices by 2020, the
majority of which will be IoT and wearable devices [1]. The once science fiction scenes that showed
our refrigerators ordering us milk and our washing machines messaging us when laundry needs to
be done are now reality.
The convenience provided by networked smart devices also breeds security and privacy concerns.
Nest founder Tony Fadell claimed in an interview, “We have bank-level security, we encrypt updates,
and we have an internal hacker team testing the security ... [the Nest Thermostat] will never take off
if people don’t trust it.” However, a deep look into the current IoT and wearable device design flow
revealed to us that most of the current security considerations, if any, are put on the application
and network level. That is, designers often treat IoT and wearable devices as standard networked
devices and try to apply the security protections developed for regular, everyday use computing
devices. It is rare to find any work done beyond firmware authentication or encryption. Most IoT
and wearable devices collect usage information and other data and send it to a service provider,
leading to privacy concerns. Full disclosure of what is collected is rare, and anything that is actually published is often hidden in the legalese of the privacy policies and terms of service of the unit.
In the rest of the paper, we will introduce our work identifying a security vulnerability in the
Nest Thermostat, targeting the hardware infrastructure and the hardware-software boundary. We
will demonstrate that attackers who understand the hardware can change the boot process of the
device in order to upload malicious firmware, effectively bypassing the firmware update verification
done by the software. From a positive angle, however, we argue that this same vulnerability offers
legitimate users a way to defend themselves against the collection of data thus protecting their
privacy and to extend the functionality of the device.
2 The Nest Thermostat
The Nest Thermostat is a smart device designed to control a central air conditioning unit based
on heuristics and learned behavior. Coupled with a WiFi module, the Nest Thermostat is able to connect to the user’s home or office network and interface with the Nest Cloud, thereby allowing for remote control of the unit. It also includes a ZigBee module for communication with other Nest devices, though the module has remained dormant in firmware versions up to the now-current 4.2.x series.
The Nest Thermostat runs a Linux kernel, coupled with some GNU userland tools, Busybox, and other miscellaneous utilities supporting a proprietary stack by Nest Labs. To remain GPL compliant, the modified source code used within the device has been published and is available for download from Nest Labs’ Open Source Compliance page at https://nest.com/legal/compliance,
with the notable exception of the C library. A toolchain to build these sources is not provided
either.
2.1 User Privacy
The Nest Thermostat will collect usage statistics of the device and environmental data and thus
“learn” the user’s behavior. This is stored within the unit and also uploaded to the Nest Cloud
once the thermostat connects to a network. Not only usage statistics are uploaded, but also system
logs and Nest software logs, which contain information such as the user’s Zip Code, device settings, HVAC settings, and wiring configuration. Forensic analysis of the device also reveals that the Nest Thermostat has code to prompt the user for information about their place of residence or office.
Reports indicate that Nest plans to share this information with energy providers in order to generate
energy more efficiently.
2.2 Architecture Overview
As a device itself, the Nest Thermostat is divided into two components, a backplate which directly
interfaces with the air conditioning unit and a front panel with a screen, a button, a rotary dial
and a motion sensor. The operating system runs on the front plate.
The backplate contains a STMicroelectronics low power ARM Cortex-M3 microcontroller with
128KiB of flash storage and 16KiB of RAM, coupled with a few driver circuits and an SHT20
temperature and humidity sensor. The backplate communicates with the front plate using a UART.
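To make that link concrete, the sketch below shows how code in the front plate's Linux userland might open the UART to the backplate using pyserial. The device node, baud rate, and the "status" request are illustrative assumptions only; the real node name and the framing of the proprietary backplate protocol are not documented here, and any real interaction would first require recovering that protocol, for example by logging traffic on the UART lines.

    # Minimal sketch: opening the front plate's UART to the backplate (Python, pyserial).
    # The device node, baud rate, and request string are assumptions for illustration,
    # not values taken from the Nest firmware.
    import serial

    BACKPLATE_TTY = "/dev/ttyO2"  # hypothetical OMAP UART node; the actual node may differ

    def query_backplate():
        # Open the serial link, send a placeholder request, and return one reply line.
        with serial.Serial(BACKPLATE_TTY, baudrate=115200, timeout=1.0) as port:
            port.write(b"status\n")  # placeholder; the real backplate protocol is proprietary
            return port.readline().decode(errors="replace").strip()

    if __name__ == "__main__":
        print(query_backplate())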
The front panel offers a Texas Instruments (TI) Sitara AM3703 microprocessor, 64MiB of
SDRAM, 2Gibit (256MiB) of ECC NAND flash, a ZigBee module and a WiFi module supporting 802.11 b/g/n. The board for this device also offers a Texas Instruments TPS65921B power
management module with HS USB capabilities. This part of the Nest has been the target of our
research so far, as it contains the most hardware and handles all user data and input.
2.3 Boot Process
Upon normal power on conditions, the Sitara AM3703 starts to execute the code in its internal
ROM. This code initializes the most basic peripherals, including the General Purpose Memory
Controller (GPMC). It then looks for the first stage bootloader, x-loader, and places it into SRAM.
Once this operation finishes, the ROM code jumps into x-loader, which proceeds to initialize other
peripherals and SDRAM. Afterwards, it copies the second stage bootloader, u-boot, into SDRAM
and proceeds to execute it. At this point, u-boot proceeds to initialize the remaining subsystems
and executes the uImage in NAND with the configured environment. The system finishes booting
from NAND as initialization scripts are executed, services are run, culminating with the loading
of the Nest Thermostat proprietary software stack. | You base all answers on the provided context block. Use only the information in the context block to answer the user's question.
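As an illustration of the final hand-off in that chain, the uImage that u-boot loads is a kernel wrapped in the standard 64-byte u-boot legacy header. The sketch below parses such a header from a dumped image; the file name is an assumption, and the layout used is the generic mkimage format rather than anything specific to Nest's firmware. Inspecting these fields on a dump is one way to confirm where the kernel expects to be loaded before the boot chain described above executes it.

    # Minimal sketch: reading the 64-byte u-boot legacy uImage header from a NAND dump.
    # The file name is hypothetical; the header layout is the standard mkimage format.
    import struct

    UIMAGE_MAGIC = 0x27051956  # magic number of a u-boot legacy image

    def parse_uimage_header(path):
        with open(path, "rb") as f:
            header = f.read(64)
        (magic, header_crc, timestamp, data_size, load_addr, entry_point,
         data_crc, os_id, arch_id, image_type, compression) = struct.unpack(">7I4B", header[:32])
        name = header[32:64].split(b"\x00", 1)[0].decode(errors="replace")
        if magic != UIMAGE_MAGIC:
            raise ValueError("not a u-boot legacy image")
        return {"name": name, "data_size": data_size,
                "load_addr": hex(load_addr), "entry_point": hex(entry_point)}

    if __name__ == "__main__":
        print(parse_uimage_header("uImage.dump"))  # hypothetical dump extracted from NAND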
How do the two parts of the nest interface?
Abstract
The Nest Thermostat is a smart home automation device that aims to learn a user’s heating
and cooling habits to help optimize scheduling and power usage. With its debut in 2011, Nest
has proven to be such a success that Google spent $3.2B to acquire the company. However,
the complexity of the infrastructure in the Nest Thermostat provides a breeding ground for
security vulnerabilities similar to those found in other computer systems. To mitigate this
issue, Nest signs firmware updates sent to the device, but the hardware infrastructure lacks
proper protection, allowing attackers to install malicious software into the unit. Through a USB
connection, we demonstrate how the firmware verification done by the Nest software stack can
be bypassed, providing the means to completely alter the behavior of the unit. The compromised
Nest Thermostat will then act as a beachhead to attack other nodes within the local network.
Also, any information stored within the unit is now available to the attacker, who no longer has
to have physical access to the device. Finally, we present a solution to smart device architects
and manufacturers aiding the development and deployment of a secure hardware platform.
1 Introduction
The concept of Internet of Things (IoT) and wearable devices has been widely accepted in the last
few years with an increasing amount of smart devices being designed, fabricated, and deployed.
It is estimated that there will be more than 50 billion network connected devices by 2020, the
majority of which will be IoT and wearable devices1
. The once science fiction scenes that showed
our refrigerators ordering us milk and our washing machines messaging us when laundry needs to
be done are now reality.
The convenience provided by networked smart devices also breeds security and privacy concerns.
Nest founder Tony Fadell claimed in an interview, “We have bank-level security, we encrypt updates,
and we have an internal hacker team testing the security ... [the Nest Thermostat] will never take off
if people don’t trust it.” However, a deep look into the current IoT and wearable device design flow
revealed to us that most of the current security considerations, if any, are put on the application
and network level. That is, designers often treat IoT and wearable devices as standard networked
devices and try to apply the security protections developed for regular, everyday use computing
devices. It is rare to find any work done beyond firmware authentication or encryption. Most IoT
and wearable devices collect usage information and other data and send it to a service provider,
leading to privacy concerns. Full disclosure of the what is collected is rare, and anything that is
actually published is often hidden in the legalese that is the privacy policies and terms of services
of the unit
In the rest of the paper, we will introduce our work identifying a security vulnerability in the
Nest Thermostat, targeting the hardware infrastructure and the hardware-software boundary. We
will demonstrate that attackers who understand the hardware can change the boot process of the
device in order to upload malicious firmware, effectively bypassing the firmware update verification
done by the software. From a positive angle, however, we argue that this same vulnerability offers
legitimate users a way to defend themselves against the collection of data thus protecting their
privacy and to extend the functionality of the device.
2 The Nest Thermostat
The Nest Thermostat is a smart device designed to control a central air conditioning unit based
on heuristics and learned behavior. Coupled with a WiFi module, the Nest Thermostat is able
connect to the user’s home or office network and interface with the Nest Cloud, thereby allowing
for remote control of the unit. It also exhibits a ZigBee module for communication with other Nest
devices, but has remained dormant for firmware versions up to the now current 4.2.x series.
The Nest Thermostat runs a Linux kernel, coupled with some GNU userland tools, Busybox,
other miscellaneous utilities supporting a proprietary stack by Nest Labs. To remain GPL compliant, the modified source code used within the device has been published and is available for download from Nest Lab’s Open Source Compliance page at https://nest.com/legal/compliance,
with the notable exception of the C library. A toolchain to build these sources is not provided
either.
2.1 User Privacy
The Nest Thermostat will collect usage statistics of the device and environmental data and thus
“learn” the user’s behavior. This is stored within the unit and also uploaded to the Nest Cloud
once the thermostat connects to a network. Not only usage statistics are uploaded, but also system
logs and Nest software logs, which contains information such as the user’s Zip Code, device settings,
HVAC settings, and wiring configuration. Forensic analysis of the device also yields that the Nest
Thermostat has code to prompt the user for information about their place of residence or office.
Reports indicate that Nest plans to share this information with energy providers in order to generate
energy more efficiently.
2.2 Architecture Overview
As a device itself, the Nest Thermostat is divided into two components, a backplate which directly
interfaces with the air conditioning unit and a front panel with a screen, a button, a rotary dial
and a motion sensor. The operating system runs on the front plate.
The backplate contains a STMicroelectronics low power ARM Cortex-M3 microcontroller with
128KiB of flash storage and 16KiB of RAM, coupled with a few driver circuits and an SHT20
temperature and humidity sensor. The backplate communicates with the front plate using a UART.
The front panel offers a Texas Instruments (TI) Sitara AM3703 microprocessor, 64MiB of
SDRAM, 2Gibit (256MiB) of ECC NAND flash, a ZigBee module and a WiFi module supporting 802.11 b/g/n. The board for this device also offers a Texas Instruments TPS65921B power
management module with HS USB capabilities. This part of the Nest has been the target of our
research so far, as it contains the most hardware and handles all user data and input.
2.3 Boot Process
Upon normal power on conditions, the Sitara AM3703 starts to execute the code in its internal
ROM. This code initializes the most basic peripherals, including the General Purpose Memory
Controller (GPMC). It then looks for the first stage bootloader, x-loader, and places it into SRAM.
Once this operation finishes, the ROM code jumps into x-loader, which proceeds to initialize other
peripherals and SDRAM. Afterwards, it copies the second stage bootloader, u-boot, into SDRAM
and proceeds to execute it. At this point, u-boot proceeds to initialize the remaining subsystems
and executes the uImage in NAND with the configured environment. The system finishes booting
from NAND as initialization scripts are executed, services are run, culminating with the loading
of the Nest Thermostat proprietary software stack. |
You can only respond using information from the prompt. Do not rely on any internal information. Give your answer in the form of a bullet point list. List at least 3 bullet points. | Summarize the contents of each US code described in the text. | The Supreme Court has held that “officers of the State ... performing official duties,” including public
safety officials, act “under color of ... law” for purposes of Section 242. As DOJ has explained, law
enforcement officers may violate Section 242 through “excessive force, sexual assault, intentional false
arrests, theft, or the intentional fabrication of evidence resulting in a loss of liberty to another.” DOJ
enforces Sections 241 and 242 by bringing criminal charges against officers accused of violating those
statutes. People who believe their rights have been infringed may report such violations to DOJ, but
Sections 241 and 242 do not authorize suits by individuals. If DOJ elects to pursue criminal charges under
Section 241 or 242, it faces a high standard of proof. Under the cases Screws v. United States and United
States v. Guest, the prosecution must prove the defendant had “a specific intent to deprive a person of a
federal right made definite by decision or other rule of law.” Specific intent means that the defendant must
not intend only to, for example, assault a victim but must also intend to violate a federal right by doing so.
This results in what some view as a significant hurdle to bringing Section 241 and 242 claims.
DOJ brought charges under Section 242 against the officers involved in the deaths of George Floyd and
Breonna Taylor. The officers involved in Mr. Floyd’s killing pled guilty or were convicted by a jury. As of
February 2023, charges against the officers involved in Ms. Taylor’s death remain pending.
DOJ Civil Enforcement
Another section of the U.S. Code, 34 U.S.C. § 12601 (Section 12601, formerly codified at 42 U.S.C.
§ 14141) renders it “unlawful for any governmental authority, or any agent thereof, ... to engage in a
pattern or practice of conduct by law enforcement officers or by officials ... that deprives persons of
rights, privileges, or immunities secured or protected by the Constitution or laws of the United States.”
Another CRS Legal Sidebar discusses this statute in more detail. According to DOJ, potential violations
of the provision include “excessive force, discriminatory harassment, false arrests, coercive sexual
conduct, and unlawful stops, searches or arrests.” DOJ enforces the provision by filing civil complaints
against allegedly offending law enforcement agencies. The statute does not create a private right for
individuals harmed by violations to sue. Moreover, because the law applies only to a “pattern or practice
of conduct,” it cannot remedy isolated instances of misconduct. Finally, the statute does not provide for
monetary penalties. If DOJ successfully sues under the provision, it may “obtain appropriate equitable
and declaratory relief to eliminate the pattern or practice.”
Private Civil Rights Litigation
Federal law also allows individuals to seek civil redress for violations of their legal rights. The applicable
statute, 42 U.S.C. § 1983 (Section 1983), provides in relevant part:
Every person who, under color of any statute, ordinance, regulation, custom, or usage, of any State
. . . subjects, or causes to be subjected, any citizen of the United States or other person within the
jurisdiction thereof to the deprivation of any rights, privileges, or immunities secured by the
Constitution and laws, shall be liable to the party injured[.]
Unlike the other statutory provisions discussed above, Section 1983 creates a private right of action,
meaning that anyone suffering a covered deprivation of rights may sue the persons responsible. Moreover,
unlike Sections 241 and 242, courts have interpreted Section 1983 not to contain a specific intent
requirement, making it easier for plaintiffs to prove violations of the statute. | The Supreme Court has held that “officers of the State ... performing official duties,” including public
safety officials, act “under color of ... law” for purposes of Section 242. As DOJ has explained, law
enforcement officers may violate Section 242 through “excessive force, sexual assault, intentional false
arrests, theft, or the intentional fabrication of evidence resulting in a loss of liberty to another.” DOJ
enforces Sections 241 and 242 by bringing criminal charges against officers accused of violating those
statutes. People who believe their rights have been infringed may report such violations to DOJ, but
Sections 241 and 242 do not authorize suits by individuals. If DOJ elects to pursue criminal charges under
Section 241 or 242, it faces a high standard of proof. Under the cases Screws v. United States and United
States v. Guest, the prosecution must prove the defendant had “a specific intent to deprive a person of a
federal right made definite by decision or other rule of law.” Specific intent means that the defendant must
not intend only to, for example, assault a victim but must also intend to violate a federal right by doing so.
This results in what some view as a significant hurdle to bringing Section 241 and 242 claims.
DOJ brought charges under Section 242 against the officers involved in the deaths of George Floyd and
Breonna Taylor. The officers involved in Mr. Floyd’s killing pled guilty or were convicted by a jury. As of
February 2023, charges against the officers involved in Ms. Taylor’s death remain pending.
DOJ Civil Enforcement
Another section of the U.S. Code, 34 U.S.C. § 12601 (Section 12601, formerly codified at 42 U.S.C.
§ 14141) renders it “unlawful for any governmental authority, or any agent thereof, ... to engage in a
pattern or practice of conduct by law enforcement officers or by officials ... that deprives persons of
rights, privileges, or immunities secured or protected by the Constitution or laws of the United States.”
Another CRS Legal Sidebar discusses this statute in more detail. According to DOJ, potential violations
of the provision include “excessive force, discriminatory harassment, false arrests, coercive sexual
conduct, and unlawful stops, searches or arrests.” DOJ enforces the provision by filing civil complaints
against allegedly offending law enforcement agencies. The statute does not create a private right for
individuals harmed by violations to sue. Moreover, because the law applies only to a “pattern or practice
of conduct,” it cannot remedy isolated instances of misconduct. Finally, the statute does not provide for
monetary penalties. If DOJ successfully sues under the provision, it may “obtain appropriate equitable
and declaratory relief to eliminate the pattern or practice.”
Private Civil Rights Litigation
Federal law also allows individuals to seek civil redress for violations of their legal rights. The applicable
statute, 42 U.S.C. § 1983 (Section 1983), provides in relevant part:
Every person who, under color of any statute, ordinance, regulation, custom, or usage, of any State
. . . subjects, or causes to be subjected, any citizen of the United States or other person within the
jurisdiction thereof to the deprivation of any rights, privileges, or immunities secured by the
Constitution and laws, shall be liable to the party injured[.]
Unlike the other statutory provisions discussed above, Section 1983 creates a private right of action,
meaning that anyone suffering a covered deprivation of rights may sue the persons responsible. Moreover,
unlike Sections 241 and 242, courts have interpreted Section 1983 not to contain a specific intent
requirement, making it easier for plaintiffs to prove violations of the statute.
Summarize the contents of each US code described in the text.
You can only respond using information from the prompt. Do not rely on any internal information. Give your answer in the form of a bullet point list. List at least 3 bullet points. |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | I just had a baby and he was diagnosed with a VSD and I am freaking out. What is the cause of this and how common is it? Can the hole close it on its own? What symptoms should I look for in my baby? If the hole is too large or doesn't close, what is the next step? | About Ventricular Septal Defect
Key points
A ventricular septal defect (pronounced ven·tric·u·lar sep·tal de·fect) is a type of congenital heart defect. Congenital means present at birth.
A ventricular septal defect is a hole in the wall (septum) that separates the two lower chambers (ventricles) of the heart.
Doctor listening to baby's heart
What it is
A ventricular septal defect (VSD) happens during pregnancy if the wall that forms between the two ventricles does not fully develop. This leaves a hole.
In babies without a heart defect, the right side of the heart pumps oxygen-poor blood from the heart to the lungs. The left side of the heart pumps oxygen-rich blood to the rest of the body. In babies with a VSD, blood flows from the left ventricle through the VSD to the right ventricle and into the lungs.
Occurrence
About 42 of every 10,000 babies in the United States are born with a VSD.1 This means that about 16,800 babies are born with a VSD each year.
Types
An infant with a VSD can have one or more holes in different places of the septum. There are several names for these holes. Some common locations and names are listed below:
Conoventricular Ventricular Septal Defect. In general, this is a hole where portions of the ventricular septum should meet just below the pulmonary and aortic valves.
Perimembranous Ventricular Septal Defect. This is a hole in the upper section of the ventricular septum.
Inlet Ventricular Septal Defect. This is a hole in the septum near to where the blood enters the ventricles through the tricuspid and mitral valves. This type of ventricular septal defect also might be part of another heart defect called an atrioventricular septal defect (AVSD).
Muscular Ventricular Septal Defect. This is a hole in the lower, muscular part of the ventricular septum. This is the most common type of ventricular septal defect.
Normal heart compared with a heart with VSD
A VSD is one or more holes in the wall between the ventricles.
Signs and symptoms
The size of the ventricular septal defect will influence what symptoms, if any, are present.
Signs of a ventricular septal defect might be present at birth or might not appear until well after birth. If the hole is small, it could close on its own. The baby might not show any signs of the defect. However, if the hole is large, the baby might have symptoms, including
Shortness of breath
Fast or heavy breathing
Sweating
Tiredness while feeding
Poor weight gain
Complications
A ventricular septal defect increases the amount of blood that flows through the lungs. This forces the heart and lungs to work harder. Over time, if not repaired, a ventricular septal defect can increase the risk for other complications, including
Heart failure
High blood pressure in the lungs (called pulmonary hypertension)
Irregular heart rhythms (called arrhythmia)
Stroke
Risk factors
The causes of ventricular septal defects among most babies are unknown. Some babies have heart defects because of changes in their genes or chromosomes. A combination of genes and other risk factors may increase the risk for ventricular septal defects. These factors can include things in a mother's environment, what she eats or drinks, or the medicines she uses.
Diagnosis
A VSD is usually diagnosed after a baby is born. During a physical exam, a healthcare provider might hear a distinct whooshing sound, called a heart murmur. The size of the VSD will influence whether a healthcare provider hears a heart murmur during a physical exam.
If signs or symptoms are present, the healthcare provider might request one or more tests to confirm the diagnosis. The most common test is an echocardiogram, which is an ultrasound of the heart. An echocardiogram can show how large the hole is and how much blood is flowing through the hole.
A doctor has a stethoscope on a baby's chest
A VSD is usually diagnosed after a baby is born.
Treatments
Treatments for a VSD depend on the size of the hole and the problems it might cause. Many VSDs are small and close on their own. If the hole is small and causing no symptoms, the doctor will check the infant regularly. This is to ensure there are no signs of heart failure and that the hole closes. If the hole doesn't close on its own or if it's large, further action might be needed.
Depending on the size of the hole, symptoms, and general health of the child, the doctor might recommend either cardiac catheterization or open-heart surgery. These procedures will close the hole and restore normal blood flow. After surgery, the doctor will set up regular follow-up visits to make sure that the VSD remains closed.
Medicines
Some children will need medicines to help strengthen the heart muscle, lower their blood pressure, and help the body get rid of extra fluid.
Nutrition
Some babies with a ventricular septal defect become tired while feeding and do not eat enough to gain weight. To make sure babies have a healthy weight gain, a special high-calorie formula might be prescribed. Some babies become extremely tired while feeding and might need to be fed through a feeding tube.
What to expect long-term
Most children who have a VSD that closes (either on its own or with surgery) live healthy lives. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
I just had a baby and he was diagnosed with a VSD and I am freaking out. What is the cause of this and how common is it? Can the hole close it on its own? What symptoms should I look for in my baby? If the hole is too large or doesn't close, what is the next step?
About Ventricular Septal Defect
Key points
A ventricular septal defect (pronounced ven·tric·u·lar sep·tal de·fect) is atype of congenital heart defect. Congenital means present at birth.
A ventricular septal defect is a hole in the wall (septum) that separates the two lower chambers (ventricles) of the heart.
Doctor listening to baby's heart
What it is
A ventricular septal defect (VSD) happens during pregnancy if the wall that forms between the two ventricles does not fully develop. This leaves a hole.
In babies without a heart defect, the right side of the heart pumps oxygen-poor blood from the heart to the lungs. The left side of the heart pumps oxygen-rich blood to the rest of the body. In babies with a VSD, blood flows from the left ventricle through the VSD to the right ventricle and into the lungs.
Keep Reading:
How the Heart Works
Occurrence
About 42 of every 10,000 babies in the United States are born with a VSD.1 This means that about 16,800 babies are born with a VSD each year.
Types
An infant with a VSD can have one or more holes in different places of the septum. There are several names for these holes. Some common locations and names are listed below:
Conoventricular Ventricular Septal Defect. In general, this is a hole where portions of the ventricular septum should meet just below the pulmonary and aortic valves.
Perimembranous Ventricular Septal Defect. This is a hole in the upper section of the ventricular septum.
Inlet Ventricular Septal Defect. This is a hole in the septum near to where the blood enters the ventricles through the tricuspid and mitral valves. This type of ventricular septal defect also might be part of another heart defect called an atrioventricular septal defect (AVSD).
Muscular Ventricular Septal Defect. This is a hole in the lower, muscular part of the ventricular septum. This is the most common type of ventricular septal defect.
View LargerDownload
Normal heart compared with a heart with VSD
A VSD is one or more holes in the wall between the ventricles.
Signs and symptoms
The size of the ventricular septal defect will influence what symptoms, if any, are present.
Signs of a ventricular septal defect might be present at birth or might not appear until well after birth. If the hole is small, it could close on its own. The baby might not show any signs of the defect. However, if the hole is large, the baby might have symptoms, including
Shortness of breath
Fast or heavy breathing
Sweating
Tiredness while feeding
Poor weight gain
Complications
A ventricular septal defect increases the amount of blood that flows through the lungs. This forces the heart and lungs to work harder. Overtime, if not repaired, a ventricular septal defect can increase the risk for other complications, including
Heart failure
High blood pressure in the lungs (called pulmonary hypertension)
Irregular heart rhythms (called arrhythmia)
Stroke
Risk factors
The causes of ventricular septal defects among most babies are unknown. Some babies have heart defects because of changes in their genes or chromosomes. A combination of genes and other risk factors may increase the risk for ventricular septal defects. These factors can include things in a mother's environment, what she eats or drinks, or the medicines she uses.
Diagnosis
A VSD is usually diagnosed after a baby is born. During a physical exam, a healthcare provider might hear a distinct whooshing sound, called a heart murmur. The size of the VSD will influence whether a healthcare provider hears a heart murmur during a physical exam.
If signs or symptoms are present, the healthcare provider might request one or more tests to confirm the diagnosis. The most common test is an echocardiogram, which is an ultrasound of the heart. An echocardiogram can show how large the hole is and how much blood is flowing through the hole.
A doctor has a stethoscope on a babies chest
A VSD is usually diagnosed after a baby is born.
Treatments
Treatments for a VSD depend on the size of the hole and the problems it might cause. Many VSDs are small and close on their own. If the hole is small and causing no symptoms, the doctor will check the infant regularly. This is to ensure there are no signs of heart failure and that the hole closes. If the hole doesn't close on its own or if it's large, further action might needed.
Depending on the size of the hole, symptoms, and general health of the child, the doctor might recommend either cardiac catheterization or open-heart surgery. These procedures will close the hole and restore normal blood flow. After surgery, the doctor will set up regular follow-up visits to make sure that the VSD remains closed.
Medicines
Some children will need medicines to help strengthen the heart muscle, lower their blood pressure, and help the body get rid of extra fluid.
Nutrition
Some babies with a ventricular septal defect become tired while feeding and do not eat enough to gain weight. To make sure babies have a healthy weight gain, a special high-calorie formula might be prescribed. Some babies become extremely tired while feeding and might need to be fed through a feeding tube.
What to expect long-term
Most children who have a VSD that closes (either on its own or with surgery) live healthy lives.
https://www.cdc.gov/heart-defects/about/ventricular-septal-defect.html |
You can only respond using the information in the context block. Do not list any similarities between the two fields. | What are the differences between public accounting and private/corporate? | University of Nebraska at Omaha
School of Accounting
THE ACCOUNTING PROFESSION
The accounting profession has been around for hundreds of years and is incorrectly perceived as nerdy, shy, quiet people who sit and “crunch” numbers. While accounting, by nature, does require a basic understanding of math, there is so much more to an accounting career than just numbers. A career in accounting is challenging, but with challenge comes rewards and opportunities. This profession can provide an exciting path to meet and work with people from different backgrounds as well as opportunities to travel the world. Because accountants offer a specific skill-set they are typically highly compensated through competitive salaries and, in a growing number of instances, allowed flexible work schedules.
The demand for talented, knowledgeable accountants has historically been high and is projected to increase over the next few years. The Bureau of Labor Statistics estimates that the accounting job field will grow an additional 139,900 jobs through the year 2026 [1]. The U.S. News and World Report ranks “Accountant” #3 on the list of “Best Business Jobs” because of the substantial salary amounts and the job security the profession provides [2]. Every business needs at least one accountant, if not several, to help track and manage costs, assist with reporting, and calculate federal, state and local tax liabilities, along with many other tasks. Also, many individuals rely on accountants to help with tax planning, personal finance and wealth building. Because of this, there are several different types of accountants that expand beyond the standard bookkeeper or tax preparer that most people associate with the term “accountant”.
AREAS OF ACCOUNTING
The accounting profession can be broken down into public accountants and private accountants. Regardless of whether someone goes into public or private accounting, each professional must have certain skills above and beyond the education requirements. Most accountants must be detail oriented and have the ability to organize and manage time effectively. Those that work in accounting, contrary to the typical accountant stereotype, must also have outstanding people skills and be able to communicate effectively. Additionally, each business and public accounting firm uses different software when performing different job duties; therefore, accountants must be knowledgeable and comfortable with computers and have the capability to learn new systems quickly. All accountants are expected to be proficient in Microsoft Office, especially Microsoft Excel. Also, successful accountants are analytical. They can think beyond the numbers and assist with making businesses more efficient. The sections below highlight the differences and list the potential career paths and salary information for public vs. private accountants.
Public Accounting:
- Definition: accountants that serve businesses, governments, non-profits, and individuals by providing various accounting services. Those services include tax return preparation, financial statement preparation, financial statement audit or review, and various consulting related to business transactions.
- Requirements: These accountants typically have a bachelor’s degree in accounting. Most individuals that work in public accounting must be a Certified Public Accountant (CPA) and be licensed with the state.
- Why choose public accounting? It’s called public accounting because these accountants work with the public and must be able to communicate effortlessly. Public accountants work continuously with colleagues to complete projects as well as with clients to meet their specific needs. Those that choose public accounting are typically outgoing personalities that are also highly driven, goal oriented, and have a passion for serving others. It should be noted that usually it is easier for an accounting career to move from public accounting to private accounting (it’s challenging to go from private accounting to public without having to start over again in lower entry level positions).
- Public accounting has various options in-and-of itself. While public accounting firms offer a variety of services, the main services provided fall under tax preparation and audits of financial statements. These are very different and require different skill sets.
o Tax preparers must be knowledgeable in local, state and federal tax laws. However, they do more than just tax return preparation. Tax professionals work with businesses as well as individuals, assisting with tax planning, financial planning, estate planning, and tax research. Tax preparers have steady work all throughout the year but work through a “busy season” from January through April 15th. These individuals mostly work out of the firm’s office but must be willing to do some travel to visit clients.
o Auditors must be knowledgeable in Generally Accepted Accounting Principles (GAAP). These auditors provide an opinion on whether or not a business’s financial statements have been prepared according to GAAP. Auditors typically must gather information and complete their audits on location in the client’s office, which could take days, weeks or months depending on the size of the client and the service being provided. Therefore, an auditor must be willing to travel and work in close proximity with others. Auditors also have steady work throughout the year but have periodic busy seasons depending on the type of clients they have.
Private/Corporate Accounting:
- Definition: accountants that work inside a business, governmental entity or a non-profit that use financial accounting and/or managerial accounting knowledge to build financial statements and other reports to assist in evaluating various business decisions.
- Requirements: These accountants typically have a bachelor’s degree in accounting. Those that want to separate themselves from the rest will go on to receive a master’s degree and/or a professional accounting license.
- Below are some (not all) typical job titles that fall in the area of private/corporate accounting:
o Bookkeeper
o Billing Clerk
o Payroll Clerk
o Inventory Analyst
o Purchasing Manager
o Collections Clerk
o General Accountant
o Cost Accountant
o Tax Accountant
o Internal Auditor
o IT Auditor
o Controller
o Treasurer
o Chief Financial Officer
- Why choose private/corporate accounting? From the small sample of job titles above it can be noted that corporate accounting provides a vast amount of options for those that want to work inside a business or other organization. Many people are drawn to corporate accounting because they have more flexibility to design a career path that truly incorporates their interests whether it be cost management, tax compliance, account receivable management, etc. These accountants must still be able to work with others but in some positions, corporate accounting does provide a quieter, more secluded work space compared to public accounting. Additionally, many corporate accounting positions do not require a CPA license or a master’s degree making it easier to enter an accounting career (However, the CPA license, other professional accounting licenses and/or a Master’s degree will make it easier to move up in an accounting career.) | Question: What are the differences between public accounting and private/corporate?
System Instructions: You can only respond using the information in the context block. Do not list any similarities between the two fields.
Context:
University of Nebraska at Omaha
Using ONLY the context block/prompt to guide your answer, provide a comprehensive comparison of the subjects mentioned in the question. Do not use any previous knowledge or outside sources to inform your answer. | How e-sports broadcasts compare with traditional sports broadcasts? | E-Sports Broadcasting
Introduction
Sportscasters on a Digital Field
Sitting at a desk under bright lights, two announcers talk at a fast clip. After a weekend full of commentating, their voices are scratchy and fading, yet their excitement never wanes. No one watching can see the two men, though a camera sits just a few feet in front of them. Instead, the live audience and home viewers see the European champions, Fnatic, going head to head with SK Gaming on a virtual battlefield. They're 55 minutes into an absolute slugfest, the two announcers' voices rise and fall with the action of the game. Over the PA, the audience hears that this game is mere seconds away from ending. The SK team has Fnatic on the ropes after brilliantly defending their base. Fnatic's star player, Xpeke, stays, attempting to win the game singlehandedly.
The casters initially dismiss the last-ditch effort while the bulk of SK's team move to end the game on the other side of the map. However, the camera stays on Xpeke, who is in a showdown with one member of SK. Nanoseconds away from defeat, Xpeke dodges a deadly ability. The casters erupt in nearly unintelligible, frantic excitement as the 25,000 live attendees at Spodek Arena in Katowice, Poland, cheer at the sudden Fnatic victory. Back in the real world, the entire Fnatic team jumps away from their computers and piles onto Xpeke while we hear, "I do not believe it! Xpeke's done it!" Over 643,000 online viewers around the world watch the camera pan across the SK team, stunned in their defeat. From their home computers, these viewers have just witnessed e-sports history.
The above scene unfolded at the 2014 Intel Extreme Masters World Championships in
League of Legends, a popular e-sports title. The solo maneuver that Xpeke performed on that
stage has since made its way into common League of Legends vernacular, being invoked in any
match, casual or professional, where a player deftly ends a game singlehandedly. E-sports, which
encompasses many more titles than League of Legends, has become a cultural phenomenon of
sorts. People may wonder whether the whole scene is just a flash in the pan or something more
significant.
I begin this thesis in much the same way that I have begun many conversations over the
past two years: defining e-sports. In most of those conversations, I simply say "professional
video-gaming" and move on to other topics. Here, though, I fully elaborate on what e-sports
means. More than just professional gaming, e-sports is an entire industry created around
competitive gaming at all levels of play. An e-sport is not just a sports video game like the title
might suggest, though some e-sports titles are sports video games. Instead, e-sports titles are
meticulously balanced, competitive, multiplayer games. Many games would fall into this
category, but it takes a community of people to take an e-sport to the level of the classics like
Counter Strike and Starcraft.
Such communities are core to the identity of e-sports. Indeed, this identity itself is an
oxymoronic collision of geek and jock culture; a mixture that media would have us believe acts
like oil and water. Even within e-sports communities lines are hazy and misdrawn. As Taylor
and Witkowski (2010) show in their study of a mega-LAN event, the e-sports scene is fraught
with identity issues not only from outside, but within as well. The jock-like first-person-shooter
(FPS) players competing at the same event as the nerdy, enigmatic World of Warcraft players
show the conflicting, lived masculinities in e-sports. Players are unsure whether to act like
superstar athletes or tech-geeks. Can you be both?
The word e-sports alone evokes such a conflicting image. Electronic sports seems almost
paradoxical in nature. Have we moved beyond a physical match of skill and extended our
contests to avatars in a digital world? How can two players sitting at a desk be sporting? As e-
sports continue to grow not only as a segment of the gaming industry, but as a spectator affair,
we begin to see the 'sports' side of e-sports both challenged and invoked more frequently. In a
telling case, Twitter erupted after a Dota 2 tournament made an appearance on ESPN 2 in
2014. With $10 million at stake, many e-sports fans thought the event warranted the attention of
the all-sports network. Plenty of viewers took to social media to praise the move made by ESPN.
Others were shocked: "Espn2 is seriously airing an online gaming championship? Wtf man. This
is our society now. That is not a sport" (Hernandez 2014). The sports status of e-sports has been
both defended and attacked by journalists, academics, and fans alike.
The debate about the status of e-sports has been raging for many years. Witkowski's
piece, "Probing the Sportiness of E-Sports", presents both sides of the argument pulling from
games studies scholars and assessing e-sports on their terms. Ultimately though, I believe she
shelves the debate deftly when she states, "sport is a personal experience... as many a sporting
scholar has written before - if an individual considers the sporting activity they are engaged in to
be a sport... then it is a sport" (2009, 56). I do not wish to rehash this debate. I have no stake in
it. As Witkowski asserts, the attempt would be futile. Instead, I accept the role traditional sports
have played in the shaping of e-sports.
In fact, exploring the relationship between e-sports and their traditional counterpart drives
this work. In what follows, I argue that the sports media industrial complex has fundamentally
shaped the current e-sports industry. Beyond this grounding, e-sports broadcasters constantly
borrow from traditional televisual broadcasts, using models that they feel to be appropriate for
their medium. Regardless of whether e-sports qualify as sports or not, they are constantly
informed by sports broadcasting and follow a trajectory set out by traditional sports models.
This work comes about at an interesting moment in e-sports history. E-sports
audiences have never been larger: Riot Games boasted an impressive 27 million viewers for the
League of Legends World Championship in 2014, while the 2015 Intel Extreme Masters world
championship saw over 1 million concurrent viewers across multiple live-streaming platforms
(Riot Games 2014; ESL 2014). An old classic, Counter Strike, has re-emerged, albeit in a new
package. The audience it continues to draw proves that some titles have staying power in this
fickle industry. At the same time, a new title, League of Legends, consistently pulls in over
100,000 concurrent viewers for its weekly shows in the U.S. and E.U. As the League of Legends
Championship Series moves into its fifth season, it has come to resemble a traditional sports
broadcast more than it does its fellow e-sports shows. A new addition in Season 5, a segment
called Prime Time League (PTL) is nearly indistinguishable from ESPN's Pardon the
Interruption (PTI) at a glance.
Figure 1-Left Image: Prime Time League; Right Image: Pardon the Interruption
Comparing these two images reveals the level of sports emulation found in e-sports broadcasting
today. From the stats and schedule ticker at the bottom of the screen to the show rundown along
the edge of the screen, an uninitiated viewer would have difficulty distinguishing between the e-
sports show and the traditional sports show.
A steady influx of television producers and directors is starting to shape an industry that
already has an identity crisis while still investigating how best to harness the new medium of
live-streaming. These assertions are not meant to give the impression that we stand on the edge
of wholly untouched land as pioneers in a new frontier. As shown in the e-sports literature
review to follow, the e-sports industry has a history of evoking the feeling of standing on a
precipice.
Organization
In the introduction, I first provide a brief history of e-sports and take note of the
directions e-sports scholarship has pursued. Following this review, I introduce the sports media
industrial complex to better situate e-sports broadcasting within the larger media landscape of
sports broadcasting: the focus of chapter 1.
The first chapter begins by looking at the long history of sports and media. By
introducing the full gamut of sports media, I am better able to investigate how e-sports
broadcasting stays in conversation with each of its predecessors. As evidenced in the reshuffling
of sports media through history, we can see that e-sports make use of all of these forms of media
while creating something new. During this chapter, I look to the transition moments in traditional
sports broadcasting as the foundation of the e-sports industry. Moments of tension and doubt
within the sports media industry as it shifted from one medium to another provide perfect lessons
to be learned by the e-sports industry as it struggles with some of the same issues found in the
reshuffling of media history. Indeed, while making use of the same media through journalism,
public relations, and audiovisual broadcasts, the e-sports industry constantly wrangles with the
use of the newly emerged medium of live-streaming. Television especially influences live-
streamed broadcasts, which e-sports producers tend to approach with a televisual framework.
Chapter two focuses on e-sportscasters, also known as shoutcasters. I begin the chapter
with a brief look at the history of shoutcasting. Considering that many of the early shoutcasters
pull solely from traditional sportscasters, understanding their influences is crucial in
understanding how e-sports has evolved in the way it has. These individuals are, I argue, the
single most pointed signal of sportiness in e-sports, and they have pushed the e-sports industry
towards a sports model. When first-time viewers or listeners leave an e-sports broadcast with the
distinct feeling of a sports broadcast in their mind, it is the shoutcasters doing their job. They rely
heavily on conventions set by traditional sportscasters. Much like their predecessors when faced
with something new, shoutcasters borrowed what they could and innovated when there was
nothing to borrow. Chapter two also focuses on shoutcasters' formulation of their identity within
the e-sports industry as personalities, professionals, and record-keepers. Shoutcasters are just
now creating an identity separate from traditional sportscasting. Where veteran shoutcasters
relied primarily on traditional sports broadcasts, newer casters look instead to other shoutcasters.
These shoutcasters are reshaping their identity while attempting to fully embrace the new
medium of live-streaming.
The third and final chapter tackles the topic of economics in e-sports. As the history and
trajectory of sports broadcasting have profoundly affected the e-sports industry, many of the
economic models present in traditional sports bled into the e-sports industry as well. The e-sports
industry in the US and Europe has yet to be analyzed as such. Some work (Taylor 2012) has
focused on e-sports revenue streams including sponsorships, company models, and team
ownership, but overall, the subject remains underexplored. Dal Yong Jin's (2010) analysis of the
political economy of e-sports in South Korea offers a tool set for this chapter. While the South
Korean e-sports model spawned out of an extremely particular set of circumstances that cannot
be readily applied to the U.S. or E.U. e-sports scenes, Jin's investigation of the
economic systems surrounding e-sports translates well to my own investigation of the U.S. and
E.U. industries. As staggering prize pools continue to make headlines, it is easy to lose sight of
the economic system working behind the scenes to keep e-sports financially salable, or in some
cases not. The third chapter delves into traditional sports economics and their influence on the e-
sports industry. In some areas, the models translate perfectly. In others, e-sports has been unable
to tap into the same revenue generators as traditional sports. Unless some developments
significantly alter the e-sports industry, it may be more tenable to pursue models other than
that of the sports industry.
Methods
This thesis makes use of many qualitative methods including historical analysis,
interviews, and fieldwork. To grasp the significance and situation of e-sports broadcasting in its
current state fully, one must analyze the same developments in traditional sports broadcasting.
As one takes a deeper look into the past of the professional sporting industry, its influences on e-
sports become clear. A feedback loop has been created between the two. Historical analysis
offers a glimpse at key moments which defined the incredibly successful global sports industry.
Not only are similar situations appearing in e-sports, but e-sports pushes back into each of the
investigated forms of media. A few of the issues currently facing e-sports could be resolved
through following the path established by traditional sports, while other issues have arisen
because so much has been borrowed.
I also had the pleasure of conducting seven interviews with professional shoutcasters. I
limited the selection of shoutcasters to full-time professionals, rather than amateurs, to get an
insight into how these new professionals view their role within the industry. Roughly half the
participants are veteran shoutcasters of five or more years. The other half have joined the scene
more recently with one in particular having shoutcasted professionally for less than one year. As
these informants are a few of only dozens of professional shoutcasters in the world, I have
attempted to keep their identities anonymous. As professional personas, some of these casters
may benefit from being associated with this work, but I do not want to run the risk of potentially
linking these shoutcasters with their statements in the event that this information could somehow
affect the community's perception of the individual or potentially harm their prospects within the
e-sports industry. The conversations were all positive, but one can never truly assure their
informants that information they have provided in confidence will have no repercussion in any
foreseeable future. With these considerations in mind I decided before conducting the interviews
that the informants would remain anonymous.
Finally, I was also able to spend time working within the e-sports industry. My time spent
working for a prominent e-sports company profoundly shaped this thesis. Working alongside
industry professionals sparked countless conversations about the current climate of the e-sports
industry and possible futures. These conversations have both helped and challenged my thinking
about the e-sports industry. While I often refer to the e-sports industry or community as a
homogenous whole, the professionals who live within the space are not all of one mind and it
would be a mistake to present them that way. Within e-sports, there are many different games
and communities vying for viewers, players, and attention. What follows is my best attempt at
wrangling the many paths e-sports has started to follow.
E-sports Literature Review
E-sports is still a young industry and an even younger subject of critical inquiry. Most
entries into e-sports scholarship have emerged within the last five years. E-sports literature tends
to come from the much older tradition of games studies, but ties into many other fields including
the social sciences, cultural studies, economics, and law. Professional-gaming literature is a
veritable hotbed of potential research topics with more articles, theses, and dissertations
appearing every year. Much of the growing body of e-sports literature focuses on the
professionalization of gaming (Jin 2010; Mora and Heas 2005; Swalwell 2009; Taylor, Nicholas
2009; Taylor, T.L. 2012; Witkowski 2012). These histories offer much more than a rundown of
the events that created the e-sports industry. They also offer insight into our contemporary social
moment. The arrival of a professionalization of video gaming signals many significant
developments within both western and non-western culture. The global nature of e-sports and its
meshing together of complex and often conflicting identities continues to beg investigation.
E-sports literature primarily resides within the social sciences. Many cultural analyses in
e-sports (Chee and Smith 2005; Harper 2010 and 2014; Hinnant 2013; Swalwell 2009; Taylor
2011) have focused on the communities growing within different scenes. Todd Harper, for
instance, investigates the culture of competitive fighting games, a fascinating community which
stands both within and at odds with the rest of competitive gaming. Gender studies are also
becoming increasingly common within e-sports literature (Chen 2006; Crawford 2005; Leonard
2008; Taylor 2009 and 2011; Taylor and Witkowski 2010; Witkowski 2013). With the
fascinating and fraught formulation of masculinity within these spaces as well as the perceived
absence of femininity, gender studies are incredibly important within e-sports literature. Nicholas
Taylor (2011) offers insight into the ability of e-sports to create embodied performances of
masculinity at live events which spread through communities specific to certain titles or genres.
Taylor and Witkowski (2010) also show the conflicting versions of masculinity that appear in
different e-sports genres.
There has also been an increasing focus on e-sports as a spectator activity. Jeff Huang
and Gifford Cheung (2012) found in a study that many of the e-sports fans they investigated
prefer watching high-level play rather than playing a match themselves. Kaytou and Raissi
(2012) also investigate spectatorship in e-sports with a focus on how best to measure live-
streaming audiences. Others (Bowman 2013; Gommesen 2012; Kow and Young 2013) show that
the audience in e-sports has a profound effect on performance for the players, akin to a
traditional sports audience. These scholars also investigate the expertise apparent in e-sports
players that is passed on through spectating as often as practicing.
As the professional play of video games fascinates so many, e-sports literature has
understandably focused primarily on professional players. Notable exceptions include Jin (2012)
and Taylor (2012) who, while still heeding players, also investigate the surrounding factors
which allow for play at a professional level. Without these other factors, professional players
would not exist. It is in the tradition of these two authors, among others, that I base this work.
This thesis, like many of the works listed above, seeks to better understand the phenomenon of e-
sports while analyzing a particular segment of the scene. With few investigations into the
broadcasting of e-sports, I hope to contribute to e-sports literature in a way that is both unique
and replicable to other systems found within the larger e-sports framework.
Sports Media Industrial Complex
As sport and media become increasingly intertwined, it becomes difficult to analyze one
without at least acknowledging the impact of the other. Pointing to the inextricable link between
sports and media, sports media scholar K. Lefever (2012) argues, "while sport provides valuable
content and audiences for media operators, the media is a revenue source and promotional tool
for sport." As such, the steady professionalization and, in turn, commercialization of sport relies
heavily on its media counterpart. The subsequent interdependence between media outlets,
sponsors, and sports leagues creates what is often referred to as the sports/media complex or
sports media industrial complex (Jhally 1989, Rowe 1999, Maguire 1991). Wenner (1989)
coined the neologism, MediaSport, to define the deeply rooted relationship between sports and
media. The two can hardly be considered separate anymore.
Stein (2013), a Comparative Media Studies alumnus, built on the work of these earlier
scholars to create a model which could be applied to new arrivals in the sports media landscape.
Thankfully, Stein provides a fairly replicable analysis of sports video games within the broader
sports media landscape. His investigation of the relationship between televisual sports video
games and sports media largely informs my own work. He notes an almost relentless stream of
advertising and commercialization rhetoric appearing in sports video games. Building on the
work of Wenner, Rowe, and Jhally, he argues that the commodification and capitalist trends
found in traditional sports broadcasting bleed into newer media such as video games. This steady
influx of advertising and commercialization can be found in e-sports as well.
As e-sports broadcasters gain more experience and access to more robust technology,
they have started to incorporate many of the same commercial opportunities Stein noticed in
sports video games. Segments of the broadcast are occasionally sponsored, or one might see a
sponsor make an appearance in an event's title such as the Intel Extreme Masters tournament.
Where Stein argues that sports video games incorporate these advertisements as a signifier of
their televisual legitimacy, I argue that e-sports broadcasters make use of the same strategies
because they are informed by earlier forms of sports media.
The steady commercialization found in e-sports reveals the influence that the sports
media industrial complex has had on the e-sports industry. In documenting the dynamics of the
sports media industrial complex, Jhally (1989) argues that sports are best viewed as
commodities. Jhally's model focuses on the sporting industry in the US prior to the emergence of
new media. More readily applicable to e-sports, Lefever's (2012) analysis of the sports media
complex within new media details a phenomenon which has upended the former relationships
between stakeholders in the sports media industrial complex. She claims that, "the sports/media
complex has somehow changed, allowing the different stakeholders to take up new roles"
(Lefever 2012, 13). The stakeholders, including sports franchises, sponsors, and media outlets,
have had to adapt to a new media landscape with new roles. These new roles are more transient
within the high-demand world of new media. Sports organizations and franchises have taken a
more active role in connecting with fans, media outlets have taken a larger interest in sports
franchises (often buying sports franchises if it is less expensive than purchasing media rights),
and sponsors have taken advantage of new, innovative ways to reach consumers (Lefever 2012,
21). According to sports scholars Haynes and Boyle (2003), television sports viewers are no
longer expected to just sit back and relax. Instead they are expected to follow their sport through
social media, forums, blogs, and other digital outlets. This new, active fan fits well within the e-
sports industry and live-streaming, but has changed the traditional sports media industrial
complex. Before delving too far into the role of traditional sports economic models on e-sports,
however, I will first situate live-streaming and e-sports within the larger sports media industrial
complex.
Chapter 1
Sports Media in Transition From Print to Live-Streaming
Every day, millions of Americans are catching up with the latest sports news through
print, radio, television, and online. Sports have saturated the entire spectrum of mass media in
the US. With the emergence of each form of mass media, sports coverage has been at the
forefront of adoption and innovation (Bryant and Holt 2006, 22). Each major medium shift in the
US has been accompanied by a massive reshuffling of the sports media landscape. Often, this
reshuffling opens a space for a particular sport to take up the new medium, create conventions,
and carve a path for others to follow. These sports were not spawned by mass media, but their
spike in popularity around the emergence of a new medium indicates very specific social
moments in the US. Early sports magazines and print coverage of sports focused primarily on
prize-fighting, radio ushered in the golden era of baseball, and television transformed football
into a titanic entertainment industry. The rise and stabilization of sports media are as much a
product of available technology as they are indicative of societal preoccupations of the time. If
sports and sports media are indicative of our social moment, then what can we glean from the
arrival of live-streaming and e-sports?
The co-evolution of sports and media is the coalescence of many factors including
changes in power structures, modes of production, and available technology. As Bryant and Holt
argue in their investigation of the history of sports and media, "[e]ach epoch of social evolution
has witnessed important sports-media developments that were affected by the evolving socio-
cultural environment" (2006, 22). In what follows, I trace the co-evolution of sports and media
with particular focus on the relationship between emerging mass media and the media ecology
surrounding that emergence. By documenting these moments of turbulence, I establish the
framework necessary to analyze live-streaming as a new medium with which e-sports has
emerged as an early adopter and convention creator. Live-streaming did not emerge
independently from its predecessors, but rather delivers on the preoccupations of our current
social moment. It has once again started a reshuffling of the roles of media within the sports
media complex. E-sports, while primarily viewed through live-streaming, relies on all of the
previous forms of media to varying degrees. With this framework in mind, I argue that the
feedback between live-streaming, e-sports, and traditional sports has spawned an industry which
roots itself in traditional sports media while still investigating the full potential of live-streaming.
I begin by briefly discussing sports media in antiquity with Thomas Scanlon's (2006)
piece on ancient Mediterranean sports and media. After this introduction to sports media, I move
to the US in the early nineteenth century with the emergence of the first sports-only publication,
the sports magazine, as well as early print news coverage of prize fighting during the rise of
industrialization and nationalism. The next section maps the push towards immediacy in sports
coverage and the rise of radio. On the heels of radio and the golden age of baseball, I discuss the
early issues with televised sport before the post-war era. Moving into the 1950s and 1960s, I
detail the transformation of football into a televisual sport accompanied by a very specific social
contingency. I then transition into an investigation of live-streaming and e-sports, particularly
how both are in conversation with sports media history.
Origins of Sports Media
As classicist Thomas Scanlon (2006) posits, there is no history of sports without its
media counterpart. Media in antiquity, he argues, "are a tool of society, a means of transmitting a
message, primarily one from the rulers to the ruled" (Scanlon 2006, 17). While his definition is
quite limited, Scanlon is correct in noting that media are inflected with the power structures of a
society. Sports as media were classically used by those with power to reinforce the hierarchy.
Sports events were "represented as a benevolent benefaction from the rich, noble, and
empowered to those marginalized" (Scanlon 2006, 18). This reinforcement of power structures
comes through not only in the production of sporting events, but also in the medium itself.
Scanlon suggests that the most powerful sports 'medium' in classical times was Roman
architecture. The massive circuses and arenas were meant to "provoke awe, admiration, and
obedience in the citizens" (Scanlon 2006, 18). Scanlon establishes that the predominant sports
medium in a given society correlates directly with its notions of power. Within the realm of
more dispersed authority such as the Ancient Greeks, sports media reflected the high value of an
individual and his merits. Depictions of athletics in Ancient Greek poetry and pottery, made by
and for the common people, focus on a particular athlete's prowess more than the event itself. On
the other hand, societies with incredibly rigid hierarchies and god-kings such as the Ancient
Egyptians and Persians, tend to represent sports as a demonstration of the ruler's power over
their people. Ancient Rome, with its centrally focused authority, used architecture to demonstrate
the power of the nobility as both benefactors and arbiters, diminishing the role of the athlete to
that of an entertainer. Moving into more recent history with media such as newspapers and radio,
Scanlon concludes that sports media became an amalgamation of both the Roman and Greek
styles: large spectacles with massive personalities.
Establishing a Media Landscape: Early Sports Media in America
The importance of the printing press to modern society cannot be overstated. While its
precise effects are still being debated (see Eisenstein 1983), the affordances of the printing press allowed individuals
to produce and disseminate a massive amount of information far more efficiently than ever
before. With a massive rise in literacy rates and increased access to print brought about by the
printing press, the reading population of the world shifted (Eisenstein 1983). While early
readership was restricted to a very small subset of society, the printing press paved the way for
the coverage of more mundane topics such as sports. In their analysis of sports media in pre-
industrial America, sports media scholars Jennings Bryant and Andrea Holt point to two major
developments: first, the appearance of sports in newspapers as 'general news' and second the
creation of a completely sports-centered publication: the sports magazine (2006, 22). The advent
and success of sports magazines in the early nineteenth century stands as a marker for some of
the intellectual shifts of the industrial era. During this time we see a professionalization of sport
in the form of prize fighters. We also see a shift from sports as a local leisure activity to
something that one follows from a distance. Sports contests began to take on implications
beyond a mere matching of athletes.
Many sports magazines started out as independent, one-person operations that began
circulation in the 1820s and 1830s (Bryant and Holt 2006, 22). The Spirit of the Times, one of the
earliest iterations of the sports magazine, actually reached a circulation of over 100,000 readers
by the 1840s. The success of this initial sports-focused publication displays the roots of the
American sports media tradition. While they note the significance of sports magazines in the
overall climate of sports media in America, Bryant and Holt trace the advent of modern sports
media to recaps of prize fighting in the Penny Press age of the 1830s. With increased circulation
to the middle and lower classes, sports coverage increased substantially in the mid-nineteenth
century. Sports coverage in the Penny Press era focused on creating spectacular depictions of
sporting events. As McChesney, a media historian points out, James Gordon Bennett, owner of
the New York Herald,was "one of the first exponents of 'sensationalism' as a means of
generating circulation, and sport fit comfortably within this rubric" (1989, 51) Out of the
sensationalism present in these early newspapers, sports began to take on more significant
cultural meaning.
There was particular focus on regionalism and nationalism. Sports media scholar J.
Enriquez explains that sporting events were far more likely to be covered if they featured a
contest which reflected the social preoccupations of the day such as a northern horse racing
against a southern horse, or an American boxer fighting a European (2002, 201). Through these
mediated depictions, sporting events were encoded with much more meaning than a simple
contest. They reflected the contemporary hopes and anxieties of the people. Sports media built
up athletes as representatives. Newspaper recaps did much more than simply describe the
actions; they created dramas (McChesney 1989, 51). The hyped up imagery of athletes and their
contests created through the Penny Press and sports magazines became the paradigm for sports
coverage for decades while a new sport caught America's attention.
Newspaper Sports Writing and the Rise of Team Sports
The rise of baseball as a national pastime coincided with the period just after the
American Civil War. McChesney explains, "The Civil War introduced baseball to an entire
generation of Americans, as the troops on both sides played the game when time permitted.
Indeed, baseball emerged as the preeminent national team sport during this period" (1989, 52).
After the Civil War, baseball helped mediate conflict by providing common ground for
northerners and southerners. This moment was one in which the country was seeking to heal its
rift, looking for neutral things that could bind the nation together. Baseball filled a political
agenda by giving people something to focus on without opening old wounds. Sports writing
changed drastically in the years following baseball's spike in popularity. Sports began to receive
regular columns and increased coverage throughout the late nineteenth century,
leading to a new kind of journalistic specialization: the sports-writer (Enriquez 2002, 202). This
fixation on sport was a result of new socio-cultural environments. Mandelbaum (2004), a sports
media scholar and historian, argues that the industrial revolution created a new sports landscape
through several major developments. First, the notion of childhood had expanded. In the
nineteenth century, the period between birth and entering the workforce increased substantially.
The new notion of childhood permitted more people to engage with baseball, football, and
basketball. This increased interest in team sports continued into adulthood. Watching and reading
about sports in the newspaper or sports magazines became an acceptable way to recapture the
"carefree years of their lives" (Mandelbaum 2004, 2). Mandelbaum also argues that baseball
offered a renewed connection to pastoral America, creating a feeling of nostalgia for the new city
dwellers and factory workers who desperately missed the pace and beauty of rural America.
Baseball coverage created the first major feedback loop between sports and media in
America. Bryant and Holt claim that the importance of sport was downplayed significantly in the
puritan era, but, "regular, routine reporting of sports in newspapers and specialized magazines
helped shift the cultural attitude towards sports in general" (Bryant and Holt 2006, 25). They
argue that in the late 1870s through the 1890s, Americans adopted a new stance on sports as
important for the development of mind, body, and society. This new cultural stance on sports
was shaped and fostered by an increased media coverage of sports. As baseball and its media
coverage became more professionalized, Americans began to consume sports media in
completely different methods. Sports spectatorship became a regular and acceptable pastime for
the industrial worker.
The industrial revolution created the first opportunity in America for sports production
and spectatorship to be commercially successful endeavors. The growth of cities and the massive
developments in individual mobility allowed for sporting events to take on new significance
(Mandelbaum 2004, 3). Cities provided large numbers of sports players as well as spectators to
fill newly built stadiums and watch newly formed teams. Sports fandom in the U.S. fit neatly
into the predominant forms of labor and leisure. Zillmann and Paulus (1993), two psychologists
who wrote on sports spectatorship, explain, "spectatorship, as a significant form of recreation, is
an outgrowth of the monotony of machine-dictated labor, sports events became the weekend love
affair of all those whose workday was strictly regulated by production schedules" (601).
Zillmann and Paulus' article further supports the feedback between sports media consumption
and societal structures. Live spectatorship in America had previously been seen as a luxury for
the rich and powerful, but with the increased circulation of newspapers, and in particular sports
coverage, to the middle and lower classes, sports spectatorship became accessible to an entirely
new sector of the population (Bryant and Holt 2006, 21). Architecture once again emerged as an
important medium. Large concrete and steel stadiums were created, replacing the more
organically created playing fields of the late nineteenth century (Mandelbaum 2004, 52). We see
here an important transition into the production of sport as a money making opportunity. As I
discuss in the third chapter, the introduction of investors and producers fundamentally alters
sports and their media counterparts.
The available media shaped the portrayal and perception of athletics in the industrial era
as well. The idea may sound a bit romantic, but Benjamin Rader (1984), a sports scholar focused
on the transformation of sports media in America, labels the period of sports media prior to
television as an era of heroes. Whether speaking of prize-fighters or the Mighty Casey of
folklore, sports media in the industrial era painted athletes as larger-than-life characters. Rader
claims, "[t]hose standing on the assembly lines and those sitting at their desks in the
bureaucracies increasingly found their greatest satisfaction in the athletic hero, who presented an
image of all-conquering power" (1989, 16). To Rader, sports media before television presented
the American ideal. Athletes were meritocratic role-models playing for the love of the game.
Rader's analysis places the impetus on newspapers to depict dramatic stories with characters
akin to David and Goliath.
In addition to individual mobility, urbanization, and industrial work, Enriquez credits
the rise and legitimacy of sports journalism as the catalyst for the nationalization of sports in
America (2002, 201). As all forms of communication and nationalization were transforming,
sports coverage led the charge. In the early twentieth century, most newspapers had dedicated
sports writers on staff. These sports writers became famous through their innovative and
entrancing writing. Writers like W. O. McGeehan, who worked for many San Francisco papers,
described athletes as sorrowful sages and their contests as the clashing of titans on a battlefield
(Nyhistory.org 2015). In this period, however, it is difficult to judge the difference between
journalism and public relations (Bryant and Holt 2006, 30). In fact, the issue of PR penetrating
journalism in the late nineteenth to early twentieth century is explicitly laid out in Michael
Schudson's (1981) chapter, "Stories and Information: Two Journalisms in the 1890s". At the turn
of the century, there existed a dichotomy between news as entertainment and news as
information. As papers around the country struggled to define themselves, sports media also
went through a defining period. Legitimate sports writing became known for its higher literary
quality, but read more like advertisements with its exaggerated, often hyperbolic, language.
Public relations soon became as much a part of sports journalism as describing the events
themselves. Team owners understood the media's role in keeping attendance at sporting events
up and began catering to sports journalists for coverage (Enriquez 2002, 206). The team owners
expected sports journalists to act as publicists for their events. The gambit paid off as sports
writing filled more and more of the daily papers and attendance at live events continued to rise.
The sports writers added significance to the experience of watching a sporting event. Between
the shifts in the American middle class, leisure activities, and the flowery language of sports
journalism, watching a sporting event began to take on the significance of watching history
unfold. We will see these same issues appear again in e-sports coverage as journalism becomes a
legitimizing force within the e-sports landscape, torn between deep analysis and hyped-up
depictions for the sake of generating publicity.
Liveness continued to assert its role in sports media as new technologies emerged. The
telegraph especially placed the impetus on news sources to provide timely information. In a
fascinating illustration of the desire for timely sports news, the Chicago Tribune ran the
following note on March 17, 1897, the day of the legendary boxing match between Jim Corbett
and Bob Fitzsimmons: "The Tribune will display bulletins today on the prize fight. It has secured
a telegraph wire to the ring in Carson City and a competent man will describe the progress of the
fight, blow by blow, until the test is decided. The bulletins will be posted thirty seconds after
they are written in the far Western city" (Bryant and Holt 2006, 29). This fixation on live updates
for sporting events across the nation is another example of how sports media has shaped the
media landscape of America. Information began traveling faster than ever via wireless
transmissions, but it was actually a yacht race which saw one of the very first implementations of
wireless for live information transmission. Sporting events saw some of the earliest uses of the
telegraph for news reporting as well (Mott 1950, 597). As the telegraph allowed for a sense of
liveness even for remote events, it paved the way for the most significant development in sports
media prior to television: radio.
A Fixation on Liveness: Radio and Sports Consumption
Radio delivered on the push towards liveness established by the telegraph. The first
broadcast of a Major League Baseball game occurred within a year of the commercial release of
radio (Enriquez 2002, 206). Rader remarks, "Now the fan did not have to await his morning
newspaper; he instantly shared the drama transpiring on the playing field" (Rader 1984, 23). For
the first time, sports were perceived as home entertainment. Broadcasters as well as businesses
capitalized on the shift. Sports coverage was integral to the rise in popularity of radio in the
interwar period. In Rader's words,
In the pre-television era, the heroes of sports assisted the public in coping with a rapidly changing society. The sports world made it possible for Americans to continue to believe in the traditional gospel of success: that hard work, frugality, and loyalty paid dividends; that the individual was potent and could play a large role in shaping his own destiny (1984, 15).
By Rader's account, sports programming on radio delivered a much-needed revitalization
of American ideals through the transient industrial period and the Great Depression.
The rise of radio coincides with the golden age of baseball, but there was an awkward
transitional phase into the new medium while newspapers and radio both tried to define their
new boundaries. While consumers clearly desired liveness, initial radio broadcasts felt flat and
emotionless (Bryant and Holt 2006, 27). Some of the greatest blow-by-blow sports writers were
terrible at delivering a compelling radio broadcast. Sports writers were extremely adept at
creating dramas through print, but they failed to capture audiences in the early days of radio.
Oddly enough, their sports knowledge undermined their sports coverage in the new medium.
Instead, a new role emerged: the sportscaster.
In the era of radio, the performance of live sports broadcasts came with significant stakes.
Adept sportscasters were cherished more for their voices than their sports knowledge. Delivering
play-by-play depictions of sporting events takes little technical knowledge; instead, the
entertainment comes from the delivery. Mandelbaum writes of early radio sportscasters, "the
broadcasters were akin to poets and troubadours who preserved and handed down the great tales
of their cultures by committing them to memory and reciting them publicly" (2004, 80). Delivery
was actually so important that sometimes sportscasters such as Graham McNamee, known
especially for his baseball broadcasts, were not even present at the event but instead handed
written play-by-play depictions of the game so that they could add their own dramatic and
authorial tone to the live event (Mandelbaum 2004).
Another issue during the emergence of radio was redefining the role of newspaper sports
coverage. Radio could deliver the liveness desired by sports fans and was incredibly well suited
for play-by-play commentary. Newspapers had traditionally covered the blow-by-blow report of
an event, capturing the drama through flowery language and hyperbole. With radio, the
sportscaster captured the audience's attention through the same means, bringing in even more
emotion as his voice rose and fell with the action of the contest (Enriquez 2002, 202). Sports
writers instead decided to focus on an area that radio broadcasters could not: strategy. Early
sportscasters had to focus so much on the delivery of the action that they could not elaborate on
the reasons behind certain maneuvers. Sports writers took advantage of this deficiency and began
writing articles which focused on everything around the action. From in-depth analysis of
strategy to the creation of larger-than-life athlete personalities, newspaper coverage of sports in
the era of radio completely changed to remain relevant.
Sports magazines also had to find a new space to occupy during radio's reign.
Completely unable to keep up with the live coverage by radio and the strategic coverage of
America's favorite sport, baseball, sports magazines instead began to focus on niche sports such
as yacht racing. The other innovation of sports magazines in the early 1930s was their addition of
full-page color photographs of athletes, something that neither radio nor newspapers could offer
(Enriquez 2002, 202). They remained an important sports medium but had been supplanted by
both radio and newspapers. Baseball's hold on the American public was so strong that the niche
sports, which were typically covered in sports magazines, hardly seemed relevant. Football in
particular rarely saw coverage anywhere other than sports magazines (Bryant and Holt 2006, 32).
Football had traditionally been seen as a college sport reserved for the wealthy, but with an
increasing number of college graduates in the U.S. and the rise of a new medium, its niche status
was about to change (Oriard 2014, vii).
The Televisual Transformation of Sport
Television's debut in the sports world was a colossal failure. Reaching only a
few hundred people, the first American televisual sports broadcast was a Columbia-Princeton
baseball game on May 17, 1939. Just a few years after the commercial release of the television in
the U.S., RCA's first foray into televised sport flopped. The New York Times' Orrin E. Dunlap Jr.
recounted on the following Sunday, "The televiewer lacks freedom; seeing baseball on television
is too confining, for the novelty would not hold up for more than an hour if it were not for the
commentator" (Rader 1984, 17). He goes on to say, "To see the fresh green of the field as The
Mighty Casey advances to the bat, and the dust fly as he defiantly digs in, is a thrill to the eye
that cannot be electrified and flashed through space on a May day, no matter how clear the air."
Bryant, Holt, Enriquez, and Rader attribute the failure of early televisual sports to several
factors. First, television camera technology was rudimentary and receivers were even worse
(Bryant and Holt 2006, 31; Rader 1984, 18). Viewers could hardly see the player, much less
follow the ball or action on the field. Second, television was not a commercial success upon its
release. Sets were expensive and did not offer nearly enough programming to warrant their price:
an issue that created a sort of negative loop as the television industry needed more viewers to
warrant more content yet could not supply enough content to attract more viewers. The third
factor, described by Enriquez, is the failure of broadcasters to adapt to the new medium.
Sportscasters could not actually see the video feed and called the game as if they were still on
radio, recounting every single action that occurred on the field regardless of what was on viewers'
screens at home. Inexperienced camera operators had difficulty following the action and the
image rarely matched what the sportscaster was describing.
Radio sportscasters also had difficulty transitioning into the new visual medium because
they could no longer provide the same level of drama through exaggeration and hyperbole.
Where short infield ground balls could previously be described as laser-fast bullets, the viewers
at home now saw that the play was just another ordinary event. Situated somewhere in between
watching the game live at a stadium yet still sounding like radio, televisual sport had a difficult
time defining itself in the late 1930s and early 1940s. According to Rader, televisual sport
experimentation stopped completely during the Second World War (1984, 23).
With the well-established roles of radio, newspapers, and sports magazines, the revival of
televisual sport seemed to be impossible. The utter failure of televised sports in the late 1930s
into the Second World War left televisual sport in a difficult position. Sports radio's popularity
was at an all-time high in the 1940s. Baseball had captured the hearts and minds of the American
people, and famous radio broadcasters such as Bill Stern and Jack Armstrong kept them listening
with bated breath (Rader 1984, 30-31).
Baseball and more generally live event sports spectatorship, however, could not keep the
nation content for too long. In what has been dubbed the Sports Slump of the 1950s by Rader
and others (Bryant and Holt 2006, McChesney 1989), spectatorship had finally started to
dwindle. Television sets were making their way into homes in record numbers after World War
II. In the post-World War II era, pastimes shifted from inner-city, public forms of recreation to
private, home-centered forms of recreation. Sports revenue was down and change was in the air.
People could watch baseball on their television sets at home, but not many people wanted
to. As shown by the earlier quote from The New York Times, television had difficulty containing
the magic that baseball once held. Football, however, was poised to rise with the new medium. It
had been long overlooked, but football was incredibly well suited for television broadcasts. The
large, visually distinct ball and typically slow-moving action provided an acceptable subject for
contemporary television camera technology (Grano 2014, 13). College football had seen a bit of
success in newspapers, but professional football had a negative reputation as a "perversion of the
college game played for alma mater rather than a lousy paycheck" (Oriard 2014, vii). Radio
broadcasts of football had never reached the same level of success as baseball.
Professional football seemed to be a sport without a suitable medium. As sports media
scholar Michael Oriard explains, "[o]nly television could give the professional game a national
audience, and Pete Rozelle's defining act as the commissioner who ushered in the modern NFL
was to market the league through a single television contract, rather than leaving clubs to work
out their own deals" (2014, vii). This deal with broadcasting giant NBC led to the NFL's great
breakout story and what would soon become the model for televised sports (Rader 1984, 85).
With NBC still losing money on a dwindling sports fanbase, the network was ready to pull the plug
on their deal with the budding NFL until the championship match between the Baltimore Colts
and the New York Giants of 1958 (Grano 2014, 13). This match, still hailed as the 'Greatest
Game Ever Played', would become the longstanding origin story of televised football. The game
went into sudden-death overtime, pushing the broadcast into prime time on the East Coast, a slot in
which NBC never dared to place professional football. As millions of Americans tuned in for
their regularly scheduled programming, they instead found John Unitas and his Baltimore Colts
scoring the game-winning touchdown after a long, hard-fought battle. Oriard, Rader, Grano,
Oates, and Furness all trace the NFL's commercial success to this one defining moment.
As compelling as origin stories often are, the truth is that many other factors led to the
success of football in the new mass medium. New technologies such as video tape were integral
to the rise of football in America. Hitchcock argues that instant replay in particular helped with
the rebranding of professional football: "The use of video-tape gave the game of football a whole
new image... The instant replay changed football from brutal, quick collisions into graceful
leaps, tumbles and falls. It gave football an aura of art in movement. It made football attractive to
entirely new segments of the audience" (1989, 2). Where football players had once been seen as
lethargic brutes, instant replay allowed broadcasters to slow down images, dissect plays, and
highlight the athleticism of players (Rader 1984, 83-84).
Sports, with football leading the charge, were once again on the cutting edge of media
adoption. According to Dylan Mulvin, the first documented use of instant replay for review and
training purposes was in 1957 during a game between the Los Angeles Rams and the San
Francisco 49ers (2014, 49). By 1964, instant replay was a standard broadcasting technique across
all sports. The NFL's willingness to adapt to the new medium set it apart from other sports at the
time.
In addition to these technological and legal advances, Bryant and Holt as well as
McChesney argue that one particularly innovative producer reinvented sports broadcasting for
television: Roone Arledge. With ABC's full support, Arledge established television broadcasting
conventions still present today. After the 1958 Championship game between the Colts and the
Giants, ABC was scrambling to catch up to NBC's success in televised sports broadcasting.
As Enriquez describes, "Television broadcasting affected different sports in different ways. It
devastated boxing, had mixed effects on baseball, and proved a boon to college and professional
football" (2002, 202). As NBC began to ride the wave created by the NFL, ABC looked to get in
on the action.
Arledge was given free rein to perform a complete overhaul of ABC Sports. Bryant and
Holt argue that the single most important innovation Arledge brought was the notion that a
televisual broadcast should be presented "from the perspective of what the typical fan would see
if he or she attended the game live" (Bryant and Holt 2006, 33). Arledge (2003) believed that the
broadcast should capture the essence of attending a game, not just the play on the field, but the
roar of the crowd, the cheerleaders, the marching bands, and the coaches on the sidelines. As
Enriquez describes, "under Arledge, television assumed every role previously played by print
media; it served as the primary medium for experiencing events, it provided detailed analysis,
and it gave human faces to the participants" (2002, 205). Through football, televised sports were
able to set conventions which separated them from earlier forms of media. This transition lives
on in live-streaming today as we will see later with live-streaming's adaptation rather than
transformation of televised sport.
The arrival of television meant that sports radio and print media had to redefine their role
in sports coverage. Television could deliver the liveness of radio and, with the help of
commentators and technology like instant replay, the drama and dissection of strategy found in
print media. Newspaper coverage of sports was now relegated to simple recaps. Sports
magazines on the other hand rode the success of television. As Bryant and Holt assert, "Sports
Illustrated offers a classic example of an old medium responding to a new one" (2006, 36).
Rather than seeking out an area left uncovered by television, Sports Illustrated supported
televised sports by providing innovative action photography and updates on the most popular
athletes and teams at the time.
Sports broadcasts of the 1960s were infused with the hopes and fears of the Cold War
era. R. Powers, a television sports scholar, suggests that sports filled a void in the American
public, "shrugging off the darker morbidities of the Cold War and McCarthyism" (1984, 118).
The re-found focus on sports as spectacle was established by "the youthful theme of ABC, echoed the
Kennedy idealism of the new frontier, the sporting emphasis echoed Kennedy's image of
muscular athleticism..." (Whannel 2002, 34). Entertainment sports media, with its art-in-motion
presentation, delivered a message of newness and regeneration to Americans.
Through broadcasting and advertising deals, sports helped build and perpetuate the
growing conspicuous consumption movement and the capitalist ideals of post-war America.
Athletes resumed their star status. Sports stars began appearing in advertising everywhere.
Merchandising became a key part of sports promotion. Anything from replica jerseys of sports
stars to blankets and flags with team branding can be found almost anywhere in the U.S.
Contemporary sports fandom has come to mean much more than simply following a team. It
means buying a team's products, playing sports video games, joining fantasy leagues, and
watching sports entertainment television. Oates, a sports media scholar focused on the NFL,
writes that fandom has been transformed by the presentation of athletes as commodities to be
consumed selectively and self-consciously by sports fans (2014, 80). The previously subcultural
hyper-fandom activities such as fantasy football and sports video games, Oates argues, have
moved into mainstream prominence and profitability. Fans are invited to interact with athletes as
vicarious managers in fantasy sports, offering a completely new, personally tailored form of
interaction with sports organizations. This new drive for constant connection and feedback
within the sports industry culminates with live-streaming.
Live-Streaming: Constant Connection
As Oates suggests, sports fandom has fundamentally changed to reflect an increased
involvement on the part of the spectator. Athletes and personalities have become commodities
for fans to interact with. Social media, fantasy sports, and video games have created a connection
to sports stars that was never before available in other media. At any moment, a spectator can
catch highlights on ESPN, head over to forums to discuss major sporting events, or load a stream
of a match on their phone, all while tweeting at their favorite athletes with the expectation that
their words will be received on the other end.
Recent trends show a change in the sports media landscape as new platforms begin to vie
for control over sports broadcasting in the US. The NFL has recently signed a deal with Google
allowing for the streaming of games over the internet after their current contract with DirecTV
ends in 2015. This deal reflects the changing media landscape in the internet era. The rise of new
streaming platforms poses an interesting dilemma to the current media titans and new
opportunities for new forms of media sports. Thus far, using the tradition established by
McChesney, Bryant, Holt, and Rader among others, I have used sports media as a lens through
which to view particular socio-cultural moments in America. I now turn that lens towards the
contemporary sports media landscape. What can we learn about our own social moment by
looking at the use of streaming platforms for traditional sports or the arrival of e-sports as an
entirely new form of professional competition that makes use of older forms of media, but
thrives in live-streams and video on demand?
The MLB offers an early case study into the use of live-streaming for major league sports
broadcasting. The regular season in the MLB consists of 2,430 games, a staggering number
compared to the NFL's 256. The sheer number of regular season games held each year causes a
problem with over-saturation. This inundation of content lowers the value of each individual
game in the eyes of the major networks (Mondelo 2006, 283). The games that these networks
choose not to air due to scheduling conflicts previously caused many games to go unseen by fans
outside of the local media market for the two competing teams. To remedy the situation, the
MLB streamed over 1,000 regular season games online starting in 2003. The launch of MLB.tv
in 2002 allowed engaged MLB fans to continue watching content even when they did not have
access to the games through the major networks. While not initially a huge commercial success,
MLB.tv still runs today, over a decade later, at a monthly subscription of $19.99, and as of 2014
incorporated both post-season games and the World Series as part of the package (MLB.tv
2015). While the MLB has not released the official revenue totals for its live-streaming service,
with 3.7 million subscribers the platform generates well over $400 million per year (MLB.tv
2013). This little-known use of live-streaming shows a hunger for immediate interaction with
sports media regardless of the available medium.
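A quick back-of-envelope calculation shows why the subscriber figure supports this claim; the number of months the average subscriber pays for is not published, so the values below are assumptions for illustration only.

```python
# Rough check of the "well over $400 million per year" claim for MLB.tv.
# Subscriber count and monthly price come from the figures cited above;
# the months-paid values are assumptions, since MLB does not publish them.
subscribers = 3_700_000      # reported MLB.tv subscribers (MLB.tv 2013)
monthly_price = 19.99        # USD per month (MLB.tv 2015)

for months_paid in (6, 12):  # six months roughly covers a regular season
    revenue = subscribers * monthly_price * months_paid
    print(f"{months_paid} months paid -> ${revenue / 1e6:,.1f} million per year")

# 6 months paid -> $443.8 million per year
# 12 months paid -> $887.6 million per year
```

Even under the conservative assumption that subscribers pay only through the regular season, the estimate clears $400 million.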
Early live-streaming fundamentally looks and feels like television, but it filled a role
which network television could not: all access and constant connection to media. It took form on
a new platform, but did not truly differ from television. Early live-streaming is more like an
adaptation of television than a new medium. Rather than creating something new, the early foray
into live-streaming by the MLB simply adapted the already present broadcasting infrastructure
and applied it through a different avenue. Television is often invoked in live-streaming. If we
look at MLB.tv, the .tv signifies its connection to television, but that domain is actually the
official domain for the country of Tuvalu. Other streaming platforms like ustream.tv, twitch.tv,
MLG.tv, all based outside of Tuvalu, use the same top-level domain to signal their televisual connection.
Live-streaming emerged at a very particular moment in the evolution of sports media.
With air-time limited on the major networks, the internet allows a near infinite amount of content
to reach sports fans. As Oates would argue, from fantasy sports, to blogs, to live-streaming, the
internet is, for many, the new space of the sports fan. Live-streaming goes beyond the ability of
other media to reach viewers wherever and whenever, whether from a home computer or a
mobile device. Live-streaming delivers on the constant connectedness expected by consumers
today. At its roots, live-streaming is a televisual medium. So what separates it from television?
Live-streaming today has created its own niche by blending other forms of media. Most
live-streams host an internet relay chat (IRC) in addition to the audiovisual component of the
broadcast. This IRC allows viewers to chat with other audience members and often the
broadcaster, a functionality not currently available in television. This live audience connection in
live-streaming is unparalleled in television. Hamilton et al., in their investigation of the
significance of live-streaming for community creation, situate Twitch streams as an important
'third place' for community. Building on the work of both Oldenburg and McLuhan, Hamilton et
al. (2014) suggest that "By combining hot and cool media, streams enable the sharing of rich
ephemeral experiences in tandem with open participation through informal social interaction, the
ingredients for a third place." The third place that the authors point to creates a rich connection
akin to interpersonal interaction. The ephemeral nature of these interactions creates a deep sense
of community even in streams with hundreds of thousands of viewers. Live-streaming and in
turn, the IRC associated with streams creates a shared experience tantamount to the "roar of a
stadium" (Hamilton et al. 2014). These streams also pull in a global audience, connecting
isolated audiences into one hyper-connected community. Live-streaming draws on television for
its look and feel, but delivers not only on the desire for liveness perpetuated in sports media but
also the hyper-connectivity present in today's globalized world.
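Because Twitch chat is built on the standard IRC protocol, the chat layer of any stream can be reached with ordinary IRC tooling. The sketch below is a minimal illustration of that point; the host and port follow Twitch's publicly documented chat interface, while the token, nickname, and channel are placeholders rather than real credentials.

```python
# Minimal sketch of joining a Twitch stream's chat over IRC.
# The host and port follow Twitch's documented chat interface; the token,
# nickname, and channel are placeholder values, not real credentials.
import socket

HOST, PORT = "irc.chat.twitch.tv", 6667
TOKEN = "oauth:your_token_here"    # placeholder OAuth token
NICK = "your_account_name"         # placeholder account name
CHANNEL = "#some_esports_channel"  # placeholder channel

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(f"PASS {TOKEN}\r\n".encode())
    sock.sendall(f"NICK {NICK}\r\n".encode())
    sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

    while True:
        data = sock.recv(2048).decode("utf-8", errors="ignore")
        if not data:
            break
        # IRC servers expect PING messages to be answered with PONG.
        if data.startswith("PING"):
            sock.sendall("PONG :tmi.twitch.tv\r\n".encode())
        else:
            print(data.strip())  # raw chat lines from other viewers
```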
E-sports, Live-streaming, and Sports Media
Many factors contributed to the success of live-streaming for e-sports. It arrived at a
moment when television seemed closed to e-sports, it was much less expensive to produce, and
much easier to cultivate. Television broadcasts are prohibitively expensive to produce. Early
attempts at airing e-sports on television have typically flopped, rarely surviving past a second
season. E-sports are difficult to film when compared to traditional sports and conventions had
not yet been set for the televisual presentation of e-sports (Taylor 2012). The action in traditional
sports can typically be captured by one shot. E-sports broadcasts, in contrast, must synthesize
one cohesive narrative out of many different player viewpoints with varying levels of information.
In a game like CounterStrike, broadcasters must wrangle with a large map with ten players in
first-person perspective. The resulting audiovisual feed is a frantic attempt to capture the most
relevant information from the players with an outside 'observer' controlling another viewpoint
removed from the players' point of view. The observer functionality in the early days of e-sports
broadcasting created a difficult barrier to overcome for commercial success on television.
Observer functionality had not yet become a focus for game developers and commentary had not
reached the level of competency it has in more contemporary broadcasts.
Instead of finding success on television, e-sports pulls in millions of concurrent viewers
on live-streaming sites such as Twitch.tv. With television seemingly out of reach and streaming
requiring significant investment per event in the early 2000s, e-sports broadcasting remained
relatively stagnant until the arrival of a reliable, and cheap, live-streaming platform. Justin.tv
(and other similar sites like UStream and Stickam), which launched in 2007, delivered exactly
what e-sports broadcasters needed to grow. The site allowed users to quickly and easily stream
content online with the use of some relatively simple software. Both broadband internet reach
and streaming technology had developed to a point that lowered the barrier of entry for
broadcasters. Players from around the world streamed games from their bedrooms. E-sports
broadcasters reached new, massive audiences.
The success of gaming content on Justin.tv spurred a new streaming site dedicated solely
to gaming. The games-centered streaming site, Twitch.tv, launched in 2011. Twitch.tv
revolutionized the e-sports industry. Each of the casters I interviewed spent time detailing the
importance of Twitch.tv without being prompted. As one explained, Twitch.tv is "the clearest
driving factor that's grown e-sports over the past 2-3 years." As mentioned in the introduction, e-
sports audiences have reached previously unheard of levels. Large scale e-sports events regularly
see concurrent viewer numbers in the hundreds of thousands. These broadcasts, however, still largely
resemble televised sports, rarely, if ever, making use of the IRC.
Live-streaming is just one of the forms of media the e-sports industry makes use of. In
fact, e-sports interacts with most media in the same ways that traditional sports have. The e-
sports industry pushes back into almost all of the earlier forms of media discussed in this chapter.
Print and radio typically fill a PR role in e-sports coverage. Large events or developments often
make their way into publications like The New York Times. Local radio segments will
occasionally feature summaries of e-sports events occurring nearby. Internet versions of both
print and radio sports coverage are fundamental segments of the e-sports media ecosystem.
Podcasts, digital audio files available on the internet through downloads or streaming, vlogs, and
video diaries fill essentially the same role for e-sports that radio currently plays for traditional
sports. Experts weigh in on recent developments and players break down certain aspects of a
game.
E-sports journalism has also emerged as a legitimizing force within the industry. Sites
like ongamers.com and esportsheaven.com keep fans abreast of any new developments in the
professional scene for all of the major e-sports titles. Journalists like Richard Lewis add
legitimacy to e-sports through their coverage of current events. Their recaps of developments as
well as summaries of various tournaments and leagues closely resemble their print counterparts
in sports coverage. It is clear that the e-sports industry is in conversation with many forms of
media. Many of the forms and techniques are borrowed directly from sports coverage. These
forms of media did not appear instantly, however; they are the result of years of push and pull
with the larger sports media landscape. Nowhere is this more apparent than in the commentating
of e-sports live-streams.
Chapter 2
Shoutcasters Collecting Conventions
E-sportscasters, often referred to as shoutcasters, both look and sound like professional
sportscasters. Their attire and cadence both create an instant connection to televisual sports.
Having never seen a game of Starcraft 2 before, you may watch the flashing lights and
explosions with a perplexed look on your face. As you continue to watch, you hear two
commentators provide a narrative, stats fly across the screen, and you start to piece together the
game in front of you. After a few minutes, you know the two players who are facing off against
one another, you feel the excitement as they engage each other's armies, and a slight sting as the
player you were rooting for concedes the match with a polite "GG." The whole presentation feels
like a variant of Monday Night Football with virtual armies instead of football teams. From the
stat-tickers to the sound of the commentator's voice, you can almost imagine the ESPN or CBS
logo gracing the bottom corner of the screen. Shoutcasters have become a staple in e-sports. One
of the main signifiers of the 'sports' moniker professional gaming has taken on, shoutcasters lend
an air of professionalism to a scene which often struggles to define itself. By adopting the 'sport'
title, a precedent has been set for e-sports broadcasters which informs their style and
conventions.
Shoutcasters are important to investigate because they form a fundamental grounding for
e-sports which helps it to create its identity in the face of blistering turnover rates and constant
field shifts. E-sports stand in a unique position compared to traditional sports. Where players and
coaches in traditional sports often have careers that last for several years, e-sports personalities
suffer from intense turnover rates where professional careers can end within a year. E-sports
players burn out quickly and coaches rarely make a lasting name in the industry. The
recognizable personalities in e-sports are the few innovators and commentators who turned their
passion into a career. In this chapter, I analyze the role of shoutcasters within the larger
framework of the e-sports industry. I build much of this analysis on the foundation that Taylor
(2012) established in her investigation of the rise of e-sports. Much of Taylor's analysis still
holds true today, but some other developments in the field have created new dynamics within
shoutcasting that were not present during her initial encounters with shoutcasters. Understanding
how shoutcasters borrow from earlier forms of media, the issues they perceive within the
industry, and how they cultivate their own identity as shoutcasters while grappling with the
hyper-connection found in live-streaming as a medium allows us to grasp the relationship e-
sports broadcasting has with earlier forms of media while still creating its own identity. I begin
with a very brief look at the history of shoutcasting.
Shoutcasting History
One can see that even early attempts at broadcasting competitive gaming borrowed
heavily from its media contemporaries. Starcade, a 1982 show that ran for two years, marks one
of the first forays into e-sports broadcasting. Though the term e-sports had not yet emerged, the
show featured two opponents attempting to outscore each other on various arcade machines. If
we look to Starcade as an early example of e-sports, then the origins of e-sports commentating
resemble game show commentary found in Jeopardy! or The Price is Right. Watching Starcade
for the hosting alone reveals many similarities to other game shows: the host wears typical game-
show host garb, pleasantly explains every aspect of the competition, and speaks with the
broadcast voice we all recognize. Starcade also shows the constant evolution of competitive
gaming coverage as it continued to refine its camera angles, presentation, and format over its two
year run.
The model which more closely resembles our modern vision of shoutcasting gained
momentum at the turn of the twenty-first century. The title shoutcaster comes from the early
streaming software used for e-sports broadcasting, SHOUTcast. While many people familiar
with e-sports may have no idea where the term comes from, a prominent shoutcaster, djWHEAT
(2012), claims that the title remains due to its signaling of the history of e-sports. SHOUTcast, a
media streaming program, arrived in 1998, allowing interested parties to broadcast audio
recordings to various 'radio' channels for free. SHOUTcast allowed for video streaming, but as
one early shoutcaster I interviewed lamented, the bandwidth and equipment required for video
streaming were prohibitively expensive.
Instead of the audiovisual broadcast we regularly associate with e-sports live-streams
today, early shoutcasters relied on audio recordings akin to early radio coverage of traditional
sports. These early broadcasts only streamed audio to a few hundred dedicated fans on internet
radio. Early shoutcasts follow the form of traditional play-by-play radio broadcasts, focused
primarily on presenting every development in the game. In interviews, veteran shoutcasters were
not shy about admitting the influence radio sportscasters had on their own style. One mentioned
that he spent hours listening to live sports radio to hone his own skills.
Early shoutcasters also performed many aspects of the production that they are no longer
required to perform in the more mature e-sports industry. They would attend events, set up their
own station, typically with their own laptop and microphone. It was a very grassroots affair.
With little experience in the technical aspects of broadcasting, the productions emulated as much
as they could from sports broadcasting to lend an air of professionalism.
With the arrival of Twitch.tv, and other reliable streaming platforms, much of the onus of
production was taken off of shoutcasters. Instead of acting as producers, directors, editors, and
on-air talent all at once as they had in the early audio-only streams, shoutcasters are now more
able to focus on the portion of their work from which they get their name. Shoutcasting after the
early days of internet radio has come to not only sound like traditional sportscasting, but also
look like traditional sportscasting.
Something Borrowed: Influences from Sportscasting
Wardrobe
Many of the shoutcasters I interviewed talked about wardrobe as a huge change within
shoutcasting, one that was spurred entirely by looking at traditional sportscasting. Most
shoutcasters got their start wearing t-shirts and jeans at various e-sports events. Today, you will
rarely find a shoutcaster not wearing a shirt with a blazer. Looking at the images below shows the
incredible shift in shoutcasting just within the last six years.

Figure 2 - Left: Joe Miller at the 2009 Intel Friday Game London; Right: Joe Miller at the 2015 Intel Extreme Masters World Championship in Katowice, Poland. Image credit: ESL, Philip Soedler and Helena Kristiansson. Flickr.com/eslphotos

Both images feature the same shoutcaster: Joe Miller. The left-hand image comes from the 2009 Intel Friday Game London
while the right-hand image comes from the 2015 Intel Extreme Masters World Championship.
While the images are quite similar, the professionalism apparent in the right-hand image
resembles a professional sportscaster. The gamer/geek vibe found in the left-hand image has
been removed from the shoutcasting image. As a few of the shoutcasters I spoke with admitted,
the drive to rework the shoutcaster wardrobe came purely from traditional sports. On top of that,
they pointed to a desire to shed the gamer/geek stereotypes that e-sports had come to inhabit. By
adopting professional attire, they felt that they could get rid of the old image and emulate the
professionalism of a sports broadcast. Wardrobe is not the only aspect of traditional sportscasting
that has made its way into shoutcasting.
Style
One of the more elusive aspects borrowed from traditional sports is the actual
commentary style. I use the term elusive here to signal the difficulty in pinning down exactly
why shoutcasters remind us so vividly of traditional sportscasters. Early shoutcasters had no
models outside of traditional sportscasting so they took as much as they could: "So as a
broadcaster we look at traditional sportscasting. We pull from that and then make sure it fits in
game casting." As it turns out, many sports commentary conventions translate well into game
casting. As such, the first generation of casters share many similarities with television
sportscasters. Most of these early shoutcasters admit to being influenced almost entirely by
traditional sportscasters. One caster explains, "Television is where we grew up, it's what we
watched. So clearly that's where we're going to pull from."
Shoutcasters typically have no media training, instead relying on mimicry of earlier
conventions to get by. As with most positions in e-sports, and similar to early sports writers and
radio casters, shoutcasters are just passionate fans turned professional. In conversations, they
each revealed a bit of their own personal history that pushed them towards broadcasting, but only
one ever mentioned having received any sort of formal training. Years into his shoutcasting
career, he "went back and did a journalism and broadcasting course for 6-9 months." Of
particular note, he mentions, "they did one really good project which was 'how to be a news
presenter'. They taught me the basics of that." The rest, he says, he learned on-air through
experience. The other shoutcasters I interviewed echoed this story.
Most of the shoutcasters I interviewed fell into shoutcasting through happenstance and
had to learn their craft on-air. Shoutcasters are akin to the very early television sportscasters who
had to reinvent their style during broadcasts, like Bob Stanton, a radio sportscaster turned
television sportscaster who would send his friends to sports bars to gather feedback and
suggestions from audience members (Rader 1984). Echoing this inexperience and improvisation,
one shoutcaster I interviewed confided, "the first time I had ever been on camera, I sat down and
I was like, 'I have no idea how to do this.' I had done two and a half years of audio casting, but I
had never done video." Another caster recalls of his first show, "All I knew going into my first
broadcast was that I know this game. I know how it works, I know these players, and I play
against these kinds of players. I don't know how commentary works, but I can do this." After
these first, trial broadcasts, both of the above-mentioned shoutcasters admitted to going back and
watching traditional sportscasters to learn more about their craft.
Other broadcasting style conventions such as how to handle dead-air, how to end a
segment, or how to transition into gameplay were lifted directly from sportscasting. Paul
"ReDeYe" Chaloner, a prominent personality within the e-sports industry, addresses each of
these techniques in his primer on becoming a professional shoutcaster, constantly pointing to various examples from traditional sports broadcasting to illustrate his points. In his section on dead-air, Chaloner writes, "[o]ne of the best pieces of advice I had for TV was from legendary
sports producer Mike Burks (11 time Emmy award winner for sports production) who told me 'A
great commentator knows when to shut up and say nothing"' (2009, 9). Chaloner uses traditional
sports broadcasting as a way to explain shoutcasting, a clear indication of its influence on e-
sports broadcasting.
Content Analysis: Play-by-play and Color Commentary in the NFL and LCS
Another convention lifted directly from traditional sports broadcasts is the arrangement
of the casting team. Traditional television sportscasters fall into one of two roles: play-by-play or
color commentary. Shoutcasters use these same two roles. Both sports broadcasts and e-sports
broadcasts feature one of each type. The play-by-play commentator narrates the action, putting
together the complicated and unconnected segments of the game into a cohesive narrative. The
color commentator provides their in-depth analysis of the game, typically from the stance of a professional player.
Shoutcasters have adopted the two-person team directly from traditional sports
broadcasts. The path to each role follows the same pattern as well. An ex-professional player
almost always fills the role of color commentary in both traditional sports and e-sports. Their
insight is unparalleled. Color commentators attempt to break down complex series of events or
highly technical maneuvers as if they were still a professional player. In the words of one e-
sports color commentator, "I'm not pretending to be a professional player, but I'm doing my best
to emulate them." He goes on to say, "You can read up on it and study it as much as you like, but unless you've lived it, you can't really comment on it." In comparison, a play-by-play
commentator does not need to have the technical depth, but relies more on presentation. Even
though a play-by-play commentator has most likely played hundreds of hours of whichever game
they cast, they cannot fill the role of the color commentator. This dynamic allows for play-by-
play commentators to switch games with relative ease whereas color commentators, both in
traditional sports and e-sports, are locked into one game.
To illustrate the emulation of sports broadcasting found in e-sports, I now turn to a brief
content analysis of the commentary found in a regular season NFL game and a regular season
League of Legends Championship Series game. I start with the commentary from one play in an
NFL game. After presenting the traditional model, I move to the commentary from one team
fight in League of Legends to demonstrate how the convention has been adapted for e-sports
commentary. In both cases, I have removed the names of players, commentators, and teams to
cut down on jargon and clutter. Each case exhibits the dynamic present in the two-man
commentary team.
NFL
With both teams lined up, the play begins and the play-by-play commentator comes in immediately.
Play-by-play: Here's [player 1] out to midfield, a yard shy of a first down. [player 2] on the tackle.
After the play has ended, the color commentator takes over.
Color: It's been [team 1] on both sides of the ball. Whether it be defense and the way that they dominated this ball game and then offensively, the early going had
the interception, didn't get much going over the next couple of possessions offensively but since that time, [player 3] has been very precise in how he has thrown the football and they just attacked this defense every which way.
LCS
Three members of the Red Team engage Blue Team at Red Team's turret
Play-by-play: This is going to be dangerous. Doing what he can to hold out. They're going to grab the turret, the fight will continue after the shield onto [player 1] is already broken. He gets hit, the ignite is completely killing the
ultimate! He gets hit by [player 2] who turns around again and heads back to [player 3].
With the action over for the moment, the color commentator begins to speak
Color: I thought he finished a camp here too...
The color commentator is cut off as two more members of Blue Team attempt to attack.
Play-by-play: Heyo, as the top side comes in here too. [player 1], will he hit a good ultimate!? Oh! They were staring right at him but now he's just left to get shredded apart here. They couldn't have thought that this was going to go well for them.
With the fight concluded, the color commentator continues again.
Color: Is this just the week of chaos? Because that was a really really uncharacteristic lapse in judgement from [Blue Team]: Not calling everybody into
position at the right time, and [Red Team] with the advantage make them pay for it. They didn't expect the ignite from Nautilus. I think they expected Nautilus to
have exhaust instead, but [player 1] pops the ignite, and as we said there is no armor so [player 2] just... and it continues!
The color commentator is cut off once again as the two teams engage one another for a third time.
If we look at these examples for their content rather than the specific moment in the game, we can
catch a full illustration of the two-caster dynamic. As we can see by the NFL example, the play-
by-play commentator provides a running narration of the action in the game. When the action
ends, the color commentator provides the meta-level analysis of the unfolding events. In the LCS
example, we see that the same dynamic is present, however, due to the continuous action in the
game, the transition into color commentary becomes difficult. In the first lull, the LCS color
commentator tries to insert his analysis, but he is cut off by a second engagement. The color
commentator stops talking immediately and allows the play-by-play commentator to continue
describing the action. After the engagement ends, we hear the color commentator pick up again, explaining why the fight developed the way it did as well as his insight into why the teams played the way they did.
Entertainment and Narrative
Entertainment value was a repeated concept in my interviews with shoutcasters. Some
went so far as to claim that their role was only to entertain. One stated, "I want to get you
excited. I want to get you to watch the game as if it was a show on television." Many would
point to good sportscasters as an example to follow. If we recall the example of the early days of
radio sportscasting, casters had a difficult time making the transition to the new medium. Their
broadcasts felt flat when compared with their print counterparts (Bryant and Holt 2006, 27).
Early sportscasters got locked into the idea that their responsibility was to provide the basic play-
by-play depiction of a match. The golden age of sports radio was ushered in by sportscasters,
such as Graham McNamee, who were so popular that they'd be asked to cast games remotely.
McNamee, like a live version of his print counterparts, was famous for creating florid depictions
of the game: athletes became heroes and their play became combat in his telling. While the
presentation of live and accurate information was still essential, popular
radio sportscasters shifted sports media from news reports to entertainment. Sportscasters are
responsible for this shift. Without their expert embellishment, play-by-play depictions lack
entertainment value.
Even non-sports fans can feel the excitement from a particularly good sportscaster. The
game they portray is far more intriguing than any actual events happening on the field (Bryant,
Brown, Comisky, and Zillmann 1982). This disconnect forms one of the primary reasons that the
transition to casting televised sport was so difficult. The small liberties that sportscasters took
were no longer acceptable in the visual medium. Once the home viewer could see the game,
commentary had to shift to accommodate more scrutiny. Radio sportscasters were notorious for
their embellishment. As Bryant, Comisky, and Zillmann note from one of their several
investigations of sportscasting, roughly forty percent of commentary is dramatic embellishment
(1977). In 1977, the authors tracked the amount of hyperbole and exaggeration in sports
broadcasting and found that over half of the speech was dedicated to drama. E-sports
shoutcasters, by comparison, rarely use dramatic embellishment of action. A few of the
informants noted that they feel that embellishing actions is not possible due to their audience.
The e-sports audience as pictured by shoutcasters, includes mostly dedicated players.
While many sports fans may play their sport casually, e-sports fans engage with the games they
watch regularly. As one shoutcaster explains, "we've only ever gone out to a hardcore audience."
He acknowledges that the current audience is in flux, but the primary base of e-sports fans are
intensely dedicated viewers and players. Because of this dynamic, shoutcasters feel that
embellishment of the actions on screen would be difficult to slip past a discerning eye. Their
belief that dramatic embellishment isn't possible may say more about their understanding of
traditional sports fans than it does about their formulation of their role as commentators. While
unacknowledged in interviews, the possibility for shoutcasters to add embellishment exists. Their
choice not to use embellishment speaks more to their formulation of the e-sports audience than it
does to their casting quality. Instead of embellishment of action, shoutcasters rely on another
convention found in traditional sportscasting: narrative.
Studies that focus on the media effects of sportscasting suggest that sportscasters
fundamentally alter the audience perception of the telecast through story-telling and narrative
(Krein and Martin 2006). Sportscasters take many liberties in their descriptions of the game to
add a dramatic flair. In several empirical studies, Bryant, Brown, Comisky, and Zillmann (1979)
found that when sportscasters created a narrative of animosity between players, viewers felt an
increased amount of tension and engagement. They conclude that the narrative scope of the
sportscaster is critical in the perception of sports broadcasting. This narrative creation has bled
into shoutcasting as many shoutcasters attempt to amplify the emotional content of their games
by highlighting underdog stories or hyping up animosity between players. One caster I
interviewed connected his work to the narrative creation in sports commentary by stating,
"Emotion is one of the key words in commentary. You need to be able to connect a certain
emotion to the words you're saying. You need to be able to make someone scared for their
favorite player or overjoyed when they win. Create greatest enemies. You need to be able to
make these feelings through what you say or how you say it. Emotion is everything." This caster
goes to great lengths to dig up statistics from previous matchups to provide a narrative for the
match he casts. Through this investigation, the shoutcaster is able to contextualize a match with a
rich history. Perhaps two players have met three times before and each time the result has been
the same. Will viewers be able to share in the momentous victory of the underdog? As part of
their preparation, shoutcasters will research all of the previous meetings between two players to
create a history between them, a tactic which they acknowledge has been used in traditional
sports for decades.
Production
Stream production is another realm where e-sports have started to borrow heavily. While
e-sports producers may have gotten a head start on streaming live events, they often rely on the
expertise of television producers to put a show together. Multiple shoutcasters pointed to a
steady influx of television producers making their way into e-sports, "the way we approach a
production is very much like television. A lot of the production guys that are getting into it are
from television." In fact, the executive producer of the League of Legends Championship Series, an immensely popular e-sports program, is former emmy-winner Ariel Horn. Horn won his Emmy as an associate producer of the 2004 Olympics for NBC. Likewise, Mike Burks, executive producer for the Championship Gaming Series mentioned in the above quote from Paul Chaloner, had an immense amount of experience in televised sports before migrating to e- sports. These are just two of the many experienced television producers making their way into e- sports. Their style is beginning to show as e-sports events become more polished every year. If we recall the image of Prime Time League in the introduction to this thesis, we can see the influx of television conventions in e-sports from the production side. The shoutcasters benefit from the experience of working with television producers to refine their style. As the field has grown, however, we begin to see minor tweaks in style and delivery. Spending a significant time with e- sports casting, in comparison with sportscasting, reveals several distinctions. Much of this difference comes with the age of the field, but just as Starcadeevolved over its short lifespan, shoutcasters have found ways to make themselves unique. Their understanding of their role within the overall e-sports industry informs us of some of the key differences here.
Something New: Shoutcaster Identity
Shoutcasters are situated somewhere between fan and professional. As evidenced by the
above investigation of how shoutcasters are informed by their traditional predecessors, the role
of shoutcasters is still very much in flux. Shoutcasters are just recently creating their own
identity separate from their sportscasting roots. In particular, the less experienced shoutcasters I
spoke with use markedly different models to inform their own casting.
The Second Generation of Professional Shoutcasters
A second generation of casters is just now coming into the scene. Instead of looking to
traditional sportscasters as their models, they emulate veteran shoutcasters: "my influences are
the streamers that I watched. I watched everyone who casts and commentates...my commentary
style comes from those guys. I don't know how much is conscious or just mimicry." This new
caster has been on the scene for only a fraction of the time that the veterans have. In that time he
has honed his shoutcasting skills not by finding sports commentary and seeing which aspects
apply to shoutcasting, but by absorbing as much information as he could from other shoutcasters.
Another fresh shoutcaster offers a fascinating disconnect from the older casters: "I definitely
bounce off more e-sportscasters than sports. I just watch more e-sports than sports. Sports are so
different than e-sports, there's so little that I can actually use from them." Where his
predecessors admit to borrowing primarily from traditional sportscasters, this new generation has
left the realm of traditional sportscasting behind.
The professional casters provide material for an amateur level of shoutcasters to pull
from. The shoutcasters I interviewed were all professionals who typically work on major events
with massive support and budgets. With a robust network of shoutcasters to pull from, however,
we may see much more support for the grassroots level of e-sports that many early fans are
accustomed to. Current shoutcasters also provide a model for potential careers. Through the
hard-fought struggle of years-worth of unpaid events, the shoutcasters I spoke with have created
a legitimate profession worth pursuing. Most warned me that the path is no longer as easy as they
once had it. Most of them pursued shoutcasting for the love of e-sports. They had years to
fumble through persona creation, broadcast techniques, and conventions.
New, potential shoutcasters are automatically held to a higher standard. A senior caster
offered the following advice, "With how casting has changed, you need to be open to casting
multiple games. You have to be willing to learn. There is a lot we can teach a caster, but you
have to have some skills within you alone. You have to have some camera presence." The
mention of camera presence signals a significant jump from early shoutcasting. Just a few years
ago, the shoutcasters I interviewed sat down in front of a camera for the first time armed with
nothing but game knowledge; camera presence was a foreign concept to them.
Perhaps the most significant change to casters is their overall level of experience. Some
of the shoutcasters I spoke with have been broadcasting for over a decade. Time has allowed
these casters to experiment and find their own style. As mentioned earlier, many of the minutiae
involved in running a show take time to learn. Most casters got their start casually. They may
have been passionate about e-sports and created a role for themselves within the industry. Some
are former players who made the hard decision to give up on their hopes of winning big to
instead cultivate a community.
As new professionals, shoutcasters are just now coming together with the support of e-
sports companies under legitimate full-time contracts. The professional casters I spoke with all
acknowledged a significant change in their commentary since making the transition into full-time
casting with other casters around for feedback and training. One explained that he had never
been sure how to handle dead-air, moments when both casters are silent and there is little action
in the game. Through feedback sessions with other casters, he learned that there are some
appropriate times to let the viewer formulate their own opinions on the match. Heeding the
advice of veteran casters like Paul Chaloner, he went on to explain that one of the problems he
sees in shoutcasting more generally is that shoutcasters are afraid to just be quiet during a stream.
Part of the emotional build-up of a game, he explains, is letting the natural flow of a game take
its course without any input from the casters.
It will be fascinating to watch as these expert networks inform e-sports broadcasts across
the world. One informant remarked, "Now that we're all working together, we're learning a lot
off of one another, which hasn't happened in commentary before." Beyond allowing veteran
shoutcasters to compare notes, the professional status of shoutcasting provides training to new
shoutcasters. One veteran claimed, "All the junior people are learning so much faster than we
ever did. They're taking everything we learned over 5-10 years and doing it in months." These
veteran casters can now pass on their experience and their style. Techniques like hand-offs at the
end of a segment or transitions from the desk to gameplay often came up in my interviews as
issues which take years to learn, but newer shoutcasters are able to pick these cues up from
earlier shoutcasters instead of taking what they can from a sports show and hoping that
everything translates well.
Beyond the expected roles that shoutcasters fill, they also perform many secondary tasks
which don't typically fall to traditional sportscasters. In the very early days of live-streaming, shoutcasters were often responsible for every aspect of the broadcast from set-up to teardown. Some shoutcasters still regularly assist on production aspects of the broadcast such as graphics
packages, camera set-up, and audio checks, but others leave the production aspects of the stream
to more experienced hands while focusing instead on updating websites, answering tweets, creating content, or streaming their own play sessions. No two casters seem to fill exactly the same role within the broadcast team. They do, however, share some similarities which seem to form the shoutcaster identity.
Record-keepers and Community Managers
All of the casters pointed to stats-tracking as part of their roles outside of their air-time
responsibilities. Most of them keep highly detailed databases full of every possible stat they can
get a hold of from game clients and public databases. These stats can be as simple as wins and
losses from remote regions or LAN tournaments that do not post their results online. The stats
can also get as minute as the number of units a particular Starcraft 2 player built in one particular
match. When the data isn't readily available, shoutcasters go out of their way to curate the
database themselves. While some keep their database secret to provide a personal flair to their
casting, others find it important to share this information with their e-sports communities. One
shoutcaster recalled his surprise when he first worked with a major South Korean e-sports
company with its own dedicated stats team. He expressed that he had never realized how much
he needed a dedicated stats team like those found in traditional sports until that moment. It was then
that he realized how much of his daily routine was filled by stats curation. While he was grateful for the
help, he also felt personally responsible for stats collection and did not entirely trust the figures
from the professional statisticians. This example shows the difficult position e-sports fills,
constantly stuck between borrowing from traditional sports while not fully able to cope with the
maturity of the sports media industry.
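As a concrete illustration of what such a personal stats database might look like, the sketch below stores both coarse results and a fine-grained, game-specific figure. The table layout and example values are my own assumptions for illustration; no caster described an actual format to me.

```python
# Illustrative sketch of a shoutcaster's personal stats database.
# The table layout and example values are assumptions, not a real schema.
import sqlite3

conn = sqlite3.connect("caster_stats.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS match_stats (
        game        TEXT,     -- e.g. 'StarCraft 2'
        event       TEXT,     -- tournament or league name
        player      TEXT,
        opponent    TEXT,
        result      TEXT,     -- 'win' or 'loss'
        detail_key  TEXT,     -- fine-grained stat, e.g. 'units_built'
        detail_val  INTEGER
    )
""")
conn.execute(
    "INSERT INTO match_stats VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("StarCraft 2", "Local LAN final", "Player A", "Player B",
     "win", "units_built", 142),
)
conn.commit()

# Look up a head-to-head record before casting a rematch.
rows = conn.execute(
    "SELECT result, COUNT(*) FROM match_stats "
    "WHERE player = ? AND opponent = ? GROUP BY result",
    ("Player A", "Player B"),
).fetchall()
print(rows)  # e.g. [('win', 1)]
```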
Another role which tends to fill a shoutcaster's daily routine is community maintenance.
Whether the caster creates their own content on gaming sites, responds to fans on social media,
or spends their time streaming and interacting with the community, they all mentioned some
form of community maintenance as part of their duties as a shoutcaster. This particular focus on
community maintenance most likely results from the grassroots origins of shoutcasters. These
casters were a part of an e-sports community long before they became shoutcasters. Whether
they view it as their professional responsibility or a social responsibility remains unclear. They
all admit to some level of e-sports advocacy, however. They view PR and the proliferation of e-
sports as part of their responsibilities. The most effective way to tackle this issue, many of them
have decided, is through community engagement. The community aspect of shoutcasting identity
leads me to a discussion of the affordances of the hyper-connectivity in live-streaming.
Grappling with the Hyper-Connectivity in Live-streaming and E-sports
Shoutcaster Connection
I have yet to meet anyone in the e-sports industry who has not remarked on the unique
level of connection present in e-sports. Shoutcasters especially tap into the network created in
these online communities. In a representative summary of my conversations, one shoutcaster
explained, "the connectedness is so unique in e-sports. The way that we can interact with fans
instantly. The players at the end of the day are gamers, they know exactly where to look.
They've got Twitter, they go on Facebook, they post on Reddit." Audience members connect
ephemerally in the IRC of a Twitch stream, but they constantly scour the social media outlets of
their favorite stars, e-sports companies, and shoutcasters, creating a deeply connected
community. Professional shoutcasters understand that the e-sports communities operate in a
unique way when compared to traditional sports fandom. E-sports fans have an odd connection
to franchises or teams within their chosen e-sport. As mentioned before, turnover rates and
general industry growth force entire communities to radically reform from one season to another.
Where traditional sports fans often follow a team based on geographic loyalty or
familial connections, e-sports fans do not have that option. While you will often hear of fans
cheering for teams in their geographic region (North America, Europe, South-East Asia, etc.) if
they make it to the last few rounds of an international tournament, they may also base their
fandom off of a team logo, or a particular player instead. Shoutcasters recognize this dynamic
and use it to cultivate the community.
Communication, they claim, separates them from traditional sports broadcasts or even
news anchors: "We communicate more with our audience than you'll see TV news anchors or
celebrities, but it's part of our job to get more information out there." The focus on
communication seems to be unique to shoutcasters as the majority of it happens outside of their
broadcasts. While many shoutcasters define their role on-screen as an educator of sorts, the
notion of spreading information about e-sports falls outside of their screen time. This double role
of broadcaster and community manager extends what media scholars have dubbed the
broadcasting persona beyond the point typically associated with sportscasters or news anchors.
Shoutcasters and Persona
Horton and Wohl (1956), two social scientists who study mass media, make the assertion
that mass media performers make a conscious decision to create and maintain parasocial
interactions through the creation of a persona. Social scientists have coined the term parasocial
interaction for the intangible connection which most of us feel to some form of media or another.
Standing in contrast to interpersonal interaction, a person to person exchange between two real
and cognizant human beings, parasocial interaction is instead a unidirectional relationship
(Miller and Steinberg 1970). The feeling of connection we create with fictional characters, news
anchors, or sports stars does not fall within the definition of an interpersonal interaction. Whether
mediated through a screen or the pages of a book, a parasocial interaction does not manifest in an
exchange of thoughts or words between individuals. Rather, it is embodied and lived through one
individual. Schiappa et al. (2007) conducted a meta-analysis of parasocial interaction literature to
better understand how broadcasters 'hook' viewers to a certain show. They concluded that
parasocial interactions can create and prolong connection to television programming. While
Schiappa et al. concede that there are a few opportunities for a parasocial interaction to result in
interpersonal relationships in the physical world, the compelling issue is the establishment of
intimacy mediated through means well outside of a person to person context.
Horton and Wohl set out with the goal of creating a term for the relationship between
performers and their audience in mass media. The authors suggest that the emergence of mass
media created an illusion of connection to performers which was previously unavailable. They
argue that the connection people feel to mass media stars is analogous to primary social
engagement. If this type of engagement takes place in radio and television, where users have no
opportunity to interact with audience members who are not co-present, it follows that the
interaction between broadcasters, their audience, and one another in a Twitch stream is a
particularly deep connection even beyond the level noticed by Horton and Wohl.
Shoutcasters create a familiar face and personality for audience members to connect with.
Mark Levy (1979), another proponent of parasocial interaction who focused his work on news
anchors, suggests that both news anchors and sportscasters help to create and maintain
communities through regular scheduling, conversational tones, and the creation of a broadcasting
persona. Shoutcasters perform this same role to even greater effect due to the constant changes
surrounding the e-sports industry. The regularity and consistency of shoutcasters' broadcasts
helps to foster a feeling of genuine connectedness within the community.
Although difficult to quantify, many conversations with shoutcasters turned to the odd
feeling of connection that e-sports fans feel towards one another. One shoutcaster attempted to
explain this connection by stating, "[w]henever I go to an event, I realize that fans are just
friends I haven't met yet." I found this statement to be particularly poignant. It hints to the sort of
intangible connection e-sports industry personalities and fans feel to one another through live-
streams. Anecdotally, this air of friendship permeated e-sports events that I have attended and
went well beyond what I have felt at traditional sporting events or concerts.
Previously, persona creation and maintenance occurred on-screen or at events only.
Social media has forced many media personalities to extend their personas beyond the long-held
notions of broadcaster-fan interaction. In many ways, shoutcasters must go beyond even these
extended boundaries into a near constant persona maintenance because of their roles in live-
streaming and community maintenance. Many shoutcasters give up their personal, off-air time to
stream their own gameplay or to create video content which necessarily prolongs the amount of
time they embody their broadcast persona.
I found that shoutcasters create a variation on the broadcast persona. Rather than a full-
blown broadcasting personality which they inhabit while on-air, most shoutcasters have found
that between community management, social media interactions, and broadcasts, they almost
never get an opportunity to step out of their role as a shoutcaster. Due to this near constant
connection, most shoutcasters acknowledge that they act differently on air, but they tend to
simply invoke a more upbeat and charismatic version of themselves. Echoed in each of the
interviews, the casters point to the idea of excitement, "you have to get excited for the person out
there watching." Even if they are not in the mood to shoutcast, or they have had a bad day,
shoutcasters must leave their personal issues out of the broadcast. This aspect of the
shoutcaster's personality comes out in all of their interactions on social media as well.
Most of the shoutcasters I interviewed situated their role in e-sports as somewhere
between Public Relations, Marketing, and Community Management. One of the casters
explained the importance of invoking the broadcast persona when speaking about sponsor
expectations: "We're working in an industry with companies behind us, we can't always say
exactly what we want to say." Shoutcasters' acknowledgement of their involvement in securing
sponsorships signals an interesting shift in the e-sports industry: the focus of the broadcast team
on potential revenue generation. I turn now to an analysis of the revenue streams found in both
traditional sports and e-sports broadcasting.
Chapter 3
Revenue
Funding Professional Play
After situating e-sports broadcasting within the greater sports media landscape,
particularly in conventions, casting, and use of medium, it is important to analyze the portions of
sports media production that have made their way into e-sports broadcasting. If we acknowledge
the influence that traditional sports broadcasting has had on e-sports broadcasting in the realms
of conventions and casting, we must also understand the importance of this relationship at the
production and economic levels. In this chapter I discuss how the history and development of the
sports media industrial complex in the U.S. has bled into the economics of the e-sports industry.
In particular, I focus on how sports media models inform the e-sports industry while portions of
the sports industry's revenue streams remain out of reach for e-sports broadcasters. Despite the
reshuffling of the sports media industrial complex mentioned in the introduction to this thesis,
traditional sports broadcasting still relies on the same revenue streams that it had in the past.
Traditional sports producers have fully capitalized on the commodification of their content. E-
sports producers, in contrast, are still shaping their revenue streams within live-streaming. The
commercialization found in the sports media industrial complex has taken hold of the e-sports
industry in several notable ways. Following in the example set by Stein's thesis work, it is not
enough to just acknowledge the relationship between e-sports and traditional sports media, we
must also understand the path which brought e-sports broadcasting to its current state.
Introduction
Sportscasters on a Digital Field
Sitting at a desk under bright lights, two announcers talk at a fast clip. After a weekend
full of commentating, their voices are scratchy and fading, yet their excitement never wanes. No
one watching can see the two men, though a camera sits just a few feet in front of them. Instead,
the live audience and home viewers see the European champions, Fnatic, going head to head
with SK Gaming on a virtual battlefield. They're 55 minutes into an absolute slugfest, and the two
announcers' voices rise and fall with the action of the game. Over the PA, the audience hears
that this game is mere seconds away from ending. The SK team has Fnatic on the ropes after
brilliantly defending their base. Fnatic's star player, Xpeke, stays, attempting to win the game
singlehandedly.
The casters initially dismiss the last-ditch effort while the bulk of SK's team move to end
the game on the other side of the map. However, the camera stays on Xpeke, who is in a
showdown with one member of SK. Nanoseconds away from defeat, Xpeke dodges a deadly
ability. The casters erupt in nearly unintelligible, frantic excitement as the 25,000 live attendees
at Spodek Arena in Katowice, Poland cheer at the sudden Fnatic victory. Back in the real world,
the entire Fnatic team jumps away from their computers and piles onto Xpeke while we hear, "I
do not believe it! Xpeke's done it!" Over 643,000 online viewers around the world watch the
camera pan across the SK team, stunned in their defeat. From their home computers, these
viewers have just witnessed e-sports history.
The above scene unfolded at the 2014 Intel Extreme Masters World Championships in
League of Legends, a popular e-sports title. The solo maneuver that Xpeke performed on that
stage has since made its way into common League of Legends vernacular, being invoked in any
match, casual or professional, where a player deftly ends a game singlehandedly. E-sports, which
encompasses many more titles than League of Legends, has become a cultural phenomenon of
sorts. People may wonder whether the whole scene is just a flash in the pan or something more
significant.
I begin this thesis in much the same way that I have begun many conversations over the
past two years: defining e-sports. In most of those conversations, I simply say "professional
video-gaming" and move on to other topics. Here, though, I fully elaborate on what e-sports
means. More than just professional gaming, e-sports is an entire industry created around
competitive gaming at all levels of play. An e-sport is not just a sports video game like the title
might suggest, though some e-sports titles are sports video games. Instead, e-sports titles are
meticulously balanced, competitive, multiplayer games. Many games would fall into this
category, but it takes a community of people to take an e-sport to the level of the classics like
Counter Strike and Starcraft.
Such communities are core to the identity of e-sports. Indeed, this identity itself is an
oxymoronic collision of geek and jock culture; a mixture that media would have us believe acts
like oil and water. Even within e-sports communities lines are hazy and misdrawn. As Taylor
and Witkowski (2010) show in their study of a mega-LAN event, the e-sports scene is fraught
with identity issues not only from outside, but within as well. The sight of jock-like first-person-shooter
(FPS) players competing at the same event as the nerdy, enigmatic World of Warcraft players
shows the conflicting, lived masculinities in e-sports. Players are unsure whether to act like
superstar athletes or tech-geeks. Can you be both?
The word e-sports alone evokes such a conflicting image. Electronic sports seems almost
paradoxical in nature. Have we moved beyond a physical match of skill and extended our
contests to avatars in a digital world? How can two players sitting at a desk be sporting? As e-
sports continue to grow not only as a segment of the gaming industry, but as a spectator affair,
we begin to see the 'sports' side of e-sports both challenged and invoked more frequently. In a
telling case, Twitter erupted after a Dota 2 tournament made an appearance on ESPN 2 in
2014. With $10 million at stake, many e-sports fans thought the event warranted the attention of
the all-sports network. Plenty of viewers took to social media to praise the move made by ESPN.
Others were shocked: "Espn2 is seriously airing an online gaming championship? Wtf man. This
is our society now. That is not a sport" (Hernandez 2014). The sports status of e-sports has been
both defended and attacked by journalists, academics, and fans alike.
The debate about the status of e-sports has been raging for many years. Witkowski's
piece, "Probing the Sportiness of E-Sports", presents both sides of the argument pulling from
games studies scholars and assessing e-sports on their terms. Ultimately though, I believe she
shelves the debate deftly when she states, "sport is a personal experience... as many a sporting
scholar has written before - if an individual considers the sporting activity they are engaged in to
be a sport... then it is a sport" (2009, 56). I do not wish to rehash this debate. I have no stake in
it. As Witkowski asserts, the attempt would be futile. Instead, I accept the role traditional sports
have played in the shaping of e-sports.
In fact, exploring the relationship between e-sports and their traditional counterpart drives
this work. In what follows, I argue that the sports media industrial complex has fundamentally
shaped the current e-sports industry. Beyond this grounding, e-sports broadcasters constantly
borrow from traditional televisual broadcasts, using models that they feel to be appropriate for
their medium. Regardless of whether e-sports qualify as sports or not, they are constantly
informed by sports broadcasting and follow a trajectory set out by traditional sports models.
This work comes about in an interesting moment in e-sports history. E-sports
audiences have never been larger: Riot Games boasted an impressive 27 million viewers for the
League of Legends World Championship in 2014, while the 2015 Intel Extreme Masters World
Championship saw over 1 million concurrent viewers across multiple live-streaming platforms
(Riot Games 2014; ESL 2014). An old classic, Counter Strike, has re-emerged, albeit in a new
package. The audience it continues to draw proves that some titles have staying power in this
fickle industry. At the same time, a new title, League of Legends, consistently pulls in over
100,000 concurrent viewers for its weekly shows in the U.S. and E.U. As the League of Legends
Championship Series moves into its fifth season, it has come to resemble a traditional sports
broadcast more than it does its fellow e-sports shows. A new addition in Season 5, a segment
called Prime Time League (PTL), is nearly indistinguishable from ESPN's Pardon the
Interruption (PTI) at a glance.
Figure 1. Left image: Prime Time League; right image: Pardon the Interruption.
Comparing these two images reveals the level of sports emulation found in e-sports broadcasting
today. From the stats and schedule ticker at the bottom of the screen to the show rundown along
the edge of the screen, an uninitiated viewer would have difficulty distinguishing between the e-
sports show and the traditional sports show.
A steady influx of television producers and directors is starting to shape an industry that
already has an identity crisis while still investigating how best to harness the new medium of
live-streaming. These assertions are not meant to give the impression that we stand on the edge
of wholly untouched land as pioneers in a new frontier. As shown in the e-sports literature
review to follow, the e-sports industry has a history of evoking the feeling of standing on a
precipice.
Organization
In the introduction, I first provide a brief history of e-sports and take note of the
directions e-sports scholarship has pursued. Following this review, I introduce the sports media
industrial complex to better situate e-sports broadcasting within the larger media landscape of
sports broadcasting: the focus of chapter 1.
The first chapter begins by looking at the long history of sports and media. By
introducing the full gamut of sports media, I am better able to investigate how e-sports
broadcasting stays in conversation with each of its predecessors. As evidenced in the reshuffling
of sports media through history, we can see that e-sports make use of all of these forms of media
while creating something new. During this chapter, I look to the transition moments in traditional
sports broadcasting as the foundation of the e-sports industry. Moments of tension and doubt
within the sports media industry as it shifted from one medium to another provide perfect lessons
to be learned by the e-sports industry as it struggles with some of the same issues found in the
reshuffling of media history. Indeed, while making use of the same media through journalism,
public relations, and audiovisual broadcasts, the e-sports industry constantly wrangles with the
use of the newly emerged medium of live-streaming. Television especially influences live-streamed
broadcasts, which e-sports broadcasters tend to approach with the same framework as
television.
Chapter two focuses on e-sportscasters, also known as shoutcasters. I begin the chapter
with a brief look at the history of shoutcasting. Considering that many of the early shoutcasters
pull solely from traditional sportscasters, understanding their influences is crucial in
understanding how e-sports has evolved in the way it has. These individuals are, I argue, the
single most pointed signal of the sportiness in e-sports, and they have pushed the e-sports industry
towards a sports model. When first-time viewers or listeners leave an e-sports broadcast with the
distinct feeling of a sports broadcast in their mind, it is the shoutcasters doing their job. They rely
heavily on conventions set by traditional sportscasters. Much like their predecessors when faced
with something new, shoutcasters borrowed what they could and innovated when there was
nothing to borrow. Chapter two also focuses on shoutcasters' formulation of their identity within
the e-sports industry as personalities, professionals, and record-keepers. Shoutcasters are just
now creating an identity separate from traditional sportscasting. Where veteran shoutcasters
relied primarily on traditional sports broadcasts, newer casters look instead to other shoutcasters.
These shoutcasters are reshaping their identity while attempting to fully embrace the new
medium of live-streaming.
The third and final chapter tackles the topic of economics in e-sports. As the history and
trajectory of sports broadcasting has profoundly affected the e-sports industry, many of the
economic models present in traditional sports bled into the e-sports industry as well. The e-sports
industry in the US and Europe has yet to be analyzed as such. Some work (Taylor 2012) has
focused on e-sports revenue streams including sponsorships, company models, and team
ownership, but overall, the subject remains underexplored. Dal Yong Jin's (2010) analysis of the
political economy of e-sports in South Korea offers a tool set for this chapter. While the South
Korean e-sports model spawned out of an extremely particular set of circumstances that cannot
be readily applied to the U.S. or E.U. e-sports scenes, Jin's investigation of the economic
systems surrounding e-sports translates well to my own investigation of the U.S. and
E.U. industries. As staggering prize pools continue to make headlines, it is easy to lose sight of
the economic system working behind the scenes to keep e-sports financially salable, or in some
cases not. The third chapter delves into traditional sports economics and their influence on the e-
sports industry. In some areas, the models translate perfectly. In others, e-sports has been unable
to tap into the same revenue generators as traditional sports. Unless some developments
significantly alter the e-sports industry, it may be more tenable to pursue models other than those
of the sports industry.
Methods
This thesis makes use of many qualitative methods including historical analysis,
interviews, and fieldwork. To grasp the significance and situation of e-sports broadcasting in its
current state fully, one must analyze the same developments in traditional sports broadcasting.
As one takes a deeper look into the past of the professional sporting industry, its influences on e-
sports become clear. A feedback loop has been created between the two. Historical analysis
offers a glimpse at key moments which defined the incredibly successful global sports industry.
Not only are similar situations appearing in e-sports, but e-sports pushes back into each of the
investigated forms of media. A few of the issues currently facing e-sports could be resolved
through following the path established by traditional sports, while other issues have been caused
because so much has been borrowed.
I also had the pleasure of conducting seven interviews with professional shoutcasters. I
limited the selection of shoutcasters to full-time professionals, rather than amateurs, to get an
insight into how these new professionals view their role within the industry. Roughly half the
participants are veteran shoutcasters of five or more years. The other half have joined the scene
more recently with one in particular having shoutcasted professionally for less than one year. As
these informants are a few of only dozens of professional shoutcasters in the world, I have
attempted to keep their identities anonymous. As professional personas, some of these casters
may benefit from being associated with this work, but I do not want to run the risk of potentially
linking these shoutcasters with their statements in the event that this information could somehow
affect the community's perception of the individual or potentially harm their prospects within the
e-sports industry. The conversations were all positive, but one can never truly assure their
informants that information they have provided in confidence will have no repercussion in any
foreseeable future. With these considerations in mind, I decided before conducting the interviews
that the informants would remain anonymous.
Finally, I was also able to spend time working within the e-sports industry. My time spent
working for a prominent e-sports company profoundly shaped this thesis. Working alongside
industry professionals sparked countless conversations about the current climate of the e-sports
industry and possible futures. These conversations have both helped and challenged my thinking
about the e-sports industry. While I often refer to the e-sports industry or community as a
homogenous whole, the professionals who live within the space are not all of one mind and it
would be a mistake to present them that way. Within e-sports, there are many different games
and communities vying for viewers, players, and attention. What follows is my best attempt at
wrangling the many paths e-sports has started to follow.
E-sports Literature Review
E-sports is still a young industry and an even younger subject of critical inquiry. Most
entries into e-sports scholarship have emerged within the last five years. E-sports literature tends
to come from the much older tradition of games studies, but ties into many other fields including
the social sciences, cultural studies, economics, and law. Professional-gaming literature is a
veritable hotbed of potential research topics with more articles, theses, and dissertations
appearing every year. Much of the growing body of e-sports literature focuses on the
professionalization of gaming (Jin 2010; Mora and Heas 2005; Swalwell 2009; Taylor, Nicholas
2009; Taylor, T.L. 2012; Witkowski 2012). These histories offer much more than a rundown of
the events that created the e-sports industry. They also offer insight into our contemporary social
moment. The arrival of a professionalization of video gaming signals many significant
developments within both western and non-western culture. The global nature of e-sports and its
meshing together of complex and often conflicting identities continues to beg investigation.
E-sports literature primarily resides within the social sciences. Many cultural analyses in
e-sports (Chee and Smith 2005; Harper 2010 and 2014; Hinnant 2013; Swalwell 2009; Taylor
2011) have focused on the communities growing within different scenes. Todd Harper, for
instance, investigates the culture of competitive fighting games, a fascinating community which
stands both within and at odds with the rest of competitive gaming. Gender studies are also
becoming increasingly common within e-sports literature (Chen 2006; Crawford 2005; Leonard
2008; Taylor 2009 and 2011; Taylor and Witkowski 2010; Witkowski 2013). With the
fascinating and fraught formulation of masculinity within these spaces as well as the perceived
absence of femininity, gender studies are incredibly important within e-sports literature. Nicholas
Taylor (2011) offers insight into the ability of e-sports to create embodied performances of
masculinity at live events which spread through communities specific to certain titles or genres.
Taylor and Witkowski (2010) also show the conflicting versions of masculinity that appear in
different e-sports genres.
There has also been an increasing focus on e-sports as a spectator activity. Jeff Huang
and Gifford Cheung (2012) found in a study that many of the e-sports fans they investigated
prefer watching high-level play rather than playing a match themselves. Kaytou and Raissi
(2012) also investigate spectatorship in e-sports with a focus on how best to measure live-
streaming audiences. Others (Bowman 2013; Gommesen 2012; Kow and Young 2013) show that
the audience in e-sports has a profound effect on performance for the players, akin to a
traditional sports audience. These scholars also investigate the expertise apparent in e-sports
players that is passed on through spectating as often as practicing.
As the professional play of video games fascinates so many, e-sports literature has
understandably focused primarily on professional players. Notable exceptions include Jin (2012)
and Taylor (2012) who, while still heeding players, also investigate the surrounding factors
which allow for play at a professional level. Without these other factors, professional players
would not exist. It is from the tradition of these two authors, among others, that I base this work.
This thesis, like many of the works listed above, seeks to better understand the phenomenon of e-
sports while analyzing a particular segment of the scene. With few investigations into the
broadcasting of e-sports, I hope to contribute to e-sports literature in a way that is both unique
and replicable to other systems found within the larger e-sports framework.
Sports Media Industrial Complex
As sport and media become increasingly intertwined, it becomes difficult to analyze one
without at least acknowledging the impact of the other. Pointing to the inextricable link between
sports and media, sports media scholar K. Lefever (2012) argues, "while sport provides valuable
content and audiences for media operators, the media is a revenue source and promotional tool
for sport." As such, the steady professionalization and, in turn, commercialization of sport relies
heavily on its media counterpart. The subsequent interdependence between media outlets,
sponsors, and sports leagues creates what is often referred to as the sports/media complex or
sports media industrial complex (Jhally 1989, Rowe 1999, Maguire 1991). Wenner (1989)
coined the neologism, MediaSport, to define the deeply rooted relationship between sports and
media. The two can hardly be considered separate anymore.
Stein (2013), a Comparative Media Studies alumnus, building on the work of these earlier
scholars, created a model which could be applied to new arrivals in the sports media landscape.
Thankfully, Stein provides a fairly replicable analysis of sports video games within the broader
sports media landscape. His investigation of the relationship between televisual sports video
games and sports media largely informs my own work. He notes an almost relentless stream of
advertising and commercialization rhetoric appearing in sports video games. Building on the
work of Wenner, Rowe, and Jhally, he argues that the commodification and capitalist trends
found in traditional sports broadcasting bleed into newer media such as video games. This steady
influx of advertising and commercialization can be found in e-sports as well.
As e-sports broadcasters gain more experience and access to more robust technology,
they have started to incorporate many of the same commercial opportunities Stein noticed in
sports video games. Segments of the broadcast are occasionally sponsored, or one might see a
sponsor make an appearance in an event's title such as the Intel Extreme Masters tournament.
Where Stein argues that sports video games incorporate these advertisements as a signifier of
their televisual legitimacy, I argue that e-sports broadcasters make use of the same strategies
because they are informed by earlier forms of sports media.
The steady commercialization found in e-sports reveals the influence that the sports
media industrial complex has had on the e-sports industry. In documenting the dynamics of the
sports media industrial complex, Jhally (1989) argues that sports are best viewed as
commodities. Jhally's model focuses on the sporting industry in the US prior to the emergence of
new media. More readily applicable to e-sports, Lefever's (2012) analysis of the sports media
complex within new media details a phenomenon which has upended the former relationships
between stakeholders in the sports media industrial complex. She claims that, "the sports/media
complex has somehow changed, allowing the different stakeholders to take up new roles"
(Lefever 2012, 13). The stakeholders, including sports franchises, sponsors, and media outlets,
have had to adapt to a new media landscape with new roles. These new roles are more transient
within the high-demand world of new media. Sports organizations and franchises have taken a
more active role in connecting with fans, media outlets have taken a larger interest in sports
franchises (often buying sports franchises if it is less expensive than purchasing media rights),
and sponsors have taken advantage of new, innovative ways to reach consumers (Lefever 2012,
21). According to sports scholars Haynes and Boyle (2003), television sports viewers are no
longer expected to just sit back and relax. Instead they are expected to follow their sport through
social media, forums, blogs, and other digital outlets. This new, active fan fits well within the e-
sports industry and live-streaming, but has changed the traditional sports media industrial
complex. Before delving too far into the influence of traditional sports economic models on e-sports,
however, I will first situate live-streaming and e-sports within the larger sports media industrial
complex.
Chapter 1
Sports Media in Transition From Print to Live-Streaming
Every day, millions of Americans are catching up with the latest sports news through
print, radio, television, and online. Sports have saturated the entire spectrum of mass media in
the US. With the emergence of each form of mass media, sports coverage has been at the
forefront of adoption and innovation (Bryant and Holt 2006, 22). Each major medium shift in the
US has been accompanied by a massive reshuffling of the sports media landscape. Often, this
reshuffling opens a space for a particular sport to take up the new medium, create conventions,
and carve a path for others to follow. These sports were not spawned by mass media, but their
spike in popularity around the emergence of a new medium indicates very specific social
moments in the US. Early sports magazines and print coverage of sports focused primarily on
prize-fighting, radio ushered in the golden era of baseball, and television transformed football
into a titanic entertainment industry. The rise and stabilization of sports media are as much a
product of available technology as they are indicative of societal preoccupations of the time. If
sports and sports media are indicative of our social moment, then what can we glean from the
arrival of live-streaming and e-sports?
The co-evolution of sports and media is the coalescence of many factors including
changes in power structures, modes of production, and available technology. As Bryant and Holt
argue in their investigation of the history of sports and media, "[e]ach epoch of social evolution
has witnessed important sports-media developments that were affected by the evolving socio-
cultural environment" (2006, 22). In what follows, I trace the co-evolution of sports and media
with particular focus on the relationship between emerging mass media and the media ecology
surrounding that emergence. By documenting these moments of turbulence, I establish the
framework necessary to analyze live-streaming as a new medium with which e-sports has
emerged as an early adopter and convention creator. Live-streaming did not emerge
independently from its predecessors, but rather delivers on the preoccupations of our current
social moment. It has once again started a reshuffling of the roles of media within the sports
media complex. E-sports, while primarily viewed through live-streaming, relies on all of the
previous forms of media to varying degrees. With this framework in mind, I argue that the
feedback between live-streaming, e-sports, and traditional sports has spawned an industry which
roots itself in traditional sports media while still investigating the full potential of live-streaming.
I begin by briefly discussing sports media in antiquity with Thomas Scanlon's (2006)
piece on ancient Mediterranean sports and media. After this introduction to sports media, I move
to the US in the early nineteenth century with the emergence of the first sports-only publication,
the sports magazine, as well as early print news coverage of prize fighting during the rise of
industrialization and nationalism. The next section maps the push towards immediacy in sports
coverage and the rise of radio. On the heels of radio and the golden age of baseball, I discuss the
early issues with televised sport before the post-war era. Moving into the 1950s and 1960s, I
detail the transformation of football into a televisual sport accompanied by a very specific social
contingency. I then transition into an investigation of live-streaming and e-sports, particularly
how both are in conversation with sports media history.
Origins of Sports Media
As classicist Thomas Scanlon (2006) posits, there is no history of sports without its
media counterpart. Media in antiquity, he argues, "are a tool of society, a means of transmitting a
message, primarily one from the rulers to the ruled" (Scanlon 2006, 17). While his definition is
quite limited, Scanlon is correct in noting that media are inflected with the power structures of a
society. Sports as media were classically used by those with power to reinforce the hierarchy.
Sports events were "represented as a benevolent benefaction from the rich, noble, and
empowered to those marginalized" (Scanlon 2006, 18). This reinforcement of power structures
comes through not only in the production of sporting events, but also in the medium itself.
Scanlon suggests that the most powerful sports 'medium' in classical times was Roman
architecture. The massive circuses and arenas were meant to "provoke awe, admiration, and
obedience in the citizens" (Scanlon 2006, 18). Scanlon establishes that the predominant sports
medium in a given society correlates directly with their notions of power. Within the realm of
more dispersed authority such as the Ancient Greeks, sports media reflected the high value of an
individual and his merits. Depictions of athletics in Ancient Greek poetry and pottery, made by
and for the common people, focus on a particular athlete's prowess more than the event itself. On
the other hand, societies with incredibly rigid hierarchies and god-kings, such as the Ancient
Egyptians and Persians, tend to represent sports as a demonstration of the ruler's power over
their people. Ancient Rome, with its centrally focused authority, used architecture to demonstrate
the power of the nobility as both benefactors and arbiters, diminishing the role of the athlete to
that of an entertainer. Moving into more recent history with media such as newspapers and radio,
Scanlon concludes that sports media became an amalgamation of both the Roman and Greek
styles: large spectacles with massive personalities.
Establishing a Media Landscape: Early Sports Media in America
The importance of the printing press on modern society cannot be overstated. While its
precise effects are still being debated,1 the affordances of the printing press allowed individuals
to produce and disseminate a massive amount of information far more efficiently than ever
before. With a massive rise in literacy rates and increased access to print brought about by the
printing press, the reading population of the world shifted (Eisenstein 1983). While early
readership was restricted to a very small subset of society, the printing press paved the way for
the coverage of more mundane topics such as sports. In their analysis of sports media in pre-
industrial America, sports media scholars Jennings Bryant and Andrea Holt point to two major
developments: first, the appearance of sports in newspapers as 'general news' and second the
creation of a completely sports-centered publication: the sports magazine (2006, 22). The advent
and success of sports magazines in the early nineteenth century stands as a marker for some of
the intellectual shifts of the industrial era. During this time we see a professionalization of sport
in the form of prize fighters. We also see a shift from sports as a local leisure activity to
something that one follows from a distance. Sports contests began to take on implications
beyond a mere matching of athletes.

1 See Elizabeth Eisenstein. 1983. The Printing Revolution in Early Modern Europe. New York: Cambridge University Press.
Many sports magazines started out as independent, one-person operations that began
circulation in the 1820s and 1830s (Bryant and Holt 2006, 22). The Spirit of the Times, one of the
earliest iterations of the sports magazine, actually reached a circulation of over 100,000 readers
by the 1840s. The success of this initial sports-focused publication displays the roots of the
American sports media tradition. While they note the significance of sports magazines in the
overall climate of sports media in America, Bryant and Holt trace the advent of modern sports
media to recaps of prize fighting in the Penny Press age of the 1830s. With increased circulation
to the middle and lower classes, sports coverage increased substantially in the mid-nineteenth
century. Sports coverage in the Penny Press era focused on creating spectacular depictions of
sporting events. As McChesney, a media historian, points out, James Gordon Bennett, owner of
the New York Herald, was "one of the first exponents of 'sensationalism' as a means of
generating circulation, and sport fit comfortably within this rubric" (1989, 51). Out of the
sensationalism present in these early newspapers, sports began to take on more significant
cultural meaning.
There was particular focus on regionalism and nationalism. Sports media scholar J.
Enriquez explains that sporting events were far more likely to be covered if they featured a
contest which reflected the social preoccupations of the day such as a northern horse racing
against a southern horse, or an American boxer fighting a European (2002, 201). Through these
mediated depictions, sporting events were encoded with much more meaning than a simple
contest. They reflected the contemporary hopes and anxieties of the people. Sports media built
up athletes as representatives. Newspaper recaps did much more than simply describe the
actions; they created dramas (McChesney 1989, 51). The hyped up imagery of athletes and their
contests created through the Penny Press and sports magazines became the paradigm for sports
coverage for decades while a new sport caught America's attention.
Newspaper Sports Writing and the Rise of Team Sports
The rise of baseball as a national pastime coincided with the period just after the
American Civil War. McChesney explains, "The Civil War introduced baseball to an entire
generation of Americans, as the troops on both sides played the game when time permitted.
Indeed, baseball emerged as the preeminent national team sport during this period" (1989, 52).
After the Civil War, baseball helped mediate conflict by providing common ground for
northerners and southerners. This moment was one in which the country was seeking to heal its
rift, looking for neutral things that could bind the nation together. Baseball filled a political
agenda by giving people something to focus on without opening old wounds. Sports writing
changed drastically in the years following baseball's spike in popularity. Sports began
to receive regular columns and increased coverage throughout the late nineteenth century,
leading to a new kind of journalistic specialization: the sports-writer (Enriquez 2002, 202). This
fixation on sport was a result of new socio-cultural environments. Mandelbaum (2004), a sports
media scholar and historian, argues that the industrial revolution created a new sports landscape
through several major developments. First, the notion of childhood had expanded. In the
nineteenth century, the period between birth and entering the workforce increased substantially.
The new notion of childhood permitted more people to engage with baseball, football, and
basketball. This increased interest in team sports continued into adulthood. Watching and reading
about sports in the newspaper or sports magazines became an acceptable way to recapture the
"carefree years of their lives" (Mandelbaum 2004, 2). Mandelbaum also argues that baseball
offered a renewed connection to pastoral America, creating a feeling of nostalgia for the new city
dwellers and factory workers who desperately missed the pace and beauty of rural America.
Baseball coverage created the first major feedback loop between sports and media in
America. Bryant and Holt claim that the importance of sport was downplayed significantly in the
puritan era, but, "regular, routine reporting of sports in newspapers and specialized magazines
helped shift the cultural attitude towards sports in general" (Bryant and Holt 2006, 25). They
argue that in the late 1870s through the 1890s, Americans adopted a new stance on sports as
important for the development of mind, body, and society. This new cultural stance on sports
was shaped and fostered by an increased media coverage of sports. As baseball and its media
coverage became more professionalized, Americans began to consume sports media in
completely different ways. Sports spectatorship became a regular and acceptable pastime for
the industrial worker.
The industrial revolution created the first opportunity in America for sports production
and spectatorship to be commercially successful endeavors. The growth of cities and the massive
developments in individual mobility allowed for sporting events to take on new significance
(Mandelbaum 2004, 3). Cities provided large numbers of sports players as well as spectators to
fill newly built stadiums and watch newly formed teams. Sports fandom in the U.S. fit neatly
into the predominant forms of labor and leisure. Zillmann and Paulus (1993), two psychologists
who wrote on sports spectatorship, explain, "spectatorship, as a significant form of recreation, is
an outgrowth of the monotony of machine-dictated labor, sports events became the weekend love
affair of all those whose workday was strictly regulated by production schedules" (601).
Zillmann and Paulus' article further supports the feedback between sports media consumption
and societal structures. Live spectatorship in America had previously been seen as a luxury for
the rich and powerful, but with the increased circulation of newspapers, and in particular sports
coverage, to the middle and lower classes, sports spectatorship became accessible to an entirely
new sector of the population (Bryant and Holt 2006, 21). Architecture once again emerged as an
important medium. Large concrete and steel stadiums were created, replacing the more
organically created playing fields of the late nineteenth century (Mandelbaum 2004, 52). We see
here an important transition into the production of sport as a money-making opportunity. As I
discuss in the third chapter, the introduction of investors and producers fundamentally alters
sports and their media counterparts.
The available media shaped the portrayal and perception of athletics in the industrial era
as well. The idea may sound a bit romantic, but Benjamin Rader (1984), a sports scholar focused
on the transformation of sports media in America, labels the period of sports media prior to
television as an era of heroes. Whether speaking of prize-fighters or the Mighty Casey of
folklore, sports media in the industrial era painted athletes as larger-than-life characters. Rader
claims, "[t]hose standing on the assembly lines and those sitting at their desks in the
bureaucracies increasingly found their greatest satisfaction in the athletic hero, who presented an
image of all-conquering power" (1989, 16). To Rader, sports media before television presented
the American ideal. Athletes were meritocratic role-models playing for the love of the game.
Rader's analysis places the impetus on newspapers to depict dramatic stories with characters
akin to David and Goliath.
In addition to individual mobility, urbanization, and industrial work, Enriquez identifies
the rise and legitimacy of sports journalism as the catalyst for the nationalization of sports in
America (2002, 201). As all forms of communication and nationalization were transforming,
sports coverage led the charge. In the early twentieth century, most newspapers had dedicated
sports writers on staff. These sports writers became famous through their innovative and
entrancing writing. Writers like W. O. McGeehan, who worked for many San Francisco papers,
described athletes as sorrowful sages and their contests as the clashing of titans on a battlefield
(Nyhistory.org 2015). In this period however, it is difficult to judge the difference between
journalism and public relations (Bryant and Holt 2006, 30). In fact, the issue of PR penetrating
journalism in the late nineteenth to early twentieth century is explicitly laid out in Michael
Schudson's (1981) chapter, "Stories and Information: Two Journalisms in the 1890s". At the turn
of the century, there existed a dichotomy between news as entertainment and news as
information. As papers around the country struggled to define themselves, sports media also
went through a defining period. Legitimate sports writing became known for its higher literary
quality, but read more like advertisements with its exaggerated, often hyperbolic, language.
Public relations soon became as much a part of sports journalism as describing the events
themselves. Team owners understood the media's role in keeping attendance at sporting events
up and began catering to sports journalists for coverage (Enriquez 2002, 206). The team owners
expected sports journalists to act as publicists for their events. The gambit paid off as sports
writing filled more and more of the daily papers and attendance at live events continued to rise.
The sports writers added significance to the experience of watching a sporting event. Between
the shifts in the American middle class, leisure activities, and the flowery language of sports
journalism, watching a sporting event began to take on the significance of watching history
unfold. We will see these same issues appear again in e-sports coverage as journalism becomes a
legitimizing force within the e-sports landscape, torn between deep analysis and hyped-up
depictions for the sake of generating publicity.
Liveness continued to assert its role in sports media as new technologies emerged. The
telegraph especially placed the impetus on news sources to provide timely information. In a
fascinating illustration of the desire for timely sports news, the Chicago Tribune ran the
following note on March 17, 1897, the day of the legendary boxing match between Jim Corbett
and Bob Fitzsimmons: "The Tribune will display bulletins today on the prize fight. It has secured
a telegraph wire to the ring in Carson City and a competent man will describe the progress of the
fight, blow by blow, until the test is decided. The bulletins will be posted thirty seconds after
they are written in the far Western city" (Bryant and Holt 2006, 29). This fixation on live updates
for sporting events across the nation is another example of how sports media has shaped the
media landscape of America. Information began traveling faster than ever via wireless
transmissions, but it was actually a yacht race which saw one of the very first implementations of
wireless for live information transmission. Sporting events saw some of the earliest uses of the
telegraph for news reporting as well (Mott 1950, 597). As the telegraph allowed for a sense of
liveness even for remote events, it paved the way for the most significant development in sports
media prior to television: radio.
A Fixation on Liveness: Radio and Sports Consumption
Radio delivered on the push towards liveness established by the telegraph. The first
broadcast of a Major League Baseball game occurred within a year of the commercial release of
radio (Enriquez 2002, 206). Rader remarks, "Now the fan did not have to await his morning
newspaper; he instantly shared the drama transpiring on the playing field" (Rader 1984, 23). For
the first time, sports were perceived as home entertainment. Broadcasters as well as businesses
capitalized on the shift. Sports coverage was integral to the rise in popularity of radio in the
interwar period. In Rader's words,
In the pre-television era, the heroes of sports assisted the public in coping with a rapidly changing society. The sports world made it possible for Americans to continue to believe in the traditional gospel of success: that hard work, frugality, and loyalty paid dividends; that the individual was potent and could play a large role in shaping his own destiny (1984, 15).
By Rader's account, sports programming on radio delivered a much-needed revitalization
of the American ideals through the transient industrial period and the Great Depression.
The rise of radio coincides with the golden age of baseball, but there was an awkward
transitional phase into the new medium while newspapers and radio both tried to define their
new boundaries. While consumers clearly desired liveness, initial radio broadcasts felt flat and
emotionless (Bryant and Holt 2006, 27). Some of the greatest blow-by-blow sports writers were
terrible at delivering a compelling radio broadcast. Sports writers were extremely adept at
creating dramas through print, but they failed to capture audiences in the early days of radio.
Oddly enough, their sports knowledge undermined their sports coverage in the new medium.
Instead, a new role emerged: the sportscaster.
In the era of radio, the performance of live sports broadcasts came with significant stakes.
Adept sportscasters were cherished more for their voices than their sports knowledge. Delivering
play-by-play depictions of sporting events takes little technical knowledge; instead, the
entertainment comes from the delivery. Mandelbaum writes of early radio sportscasters, "the
broadcasters were akin to poets and troubadours who preserved and handed down the great tales
of their cultures by committing them to memory and reciting them publicly" (2004, 80). Delivery
was actually so important that sometimes sportscasters such as Graham McNamee, known
especially for his baseball broadcasts, were not even present at the event but instead handed
written play-by-play depictions of the game so that they could add their own dramatic and
authorial tone to the live event (Mandelbaum 2004).
Another issue during the emergence of radio was redefining the role of newspaper sports
coverage. Radio could deliver the liveness desired by sports fans and was incredibly well suited
for play-by-play commentary. Newspapers had traditionally covered the blow-by-blow report of
an event, capturing the drama through flowery language and hyperbole. With radio, the
sportscaster captured the audience's attention through the same means, bringing in even more
emotion as his voice rose and fell with the action of the contest (Enriquez 2002, 202). Sports
writers instead decided to focus on an area that radio broadcasters could not: strategy. Early
sportscasters had to focus so much on the delivery of the action that they could not elaborate on
the reasons behind certain maneuvers. Sports writers took advantage of this deficiency and began
writing articles which focused on everything around the action. From in-depth analysis of
strategy to the creation of larger-than-life athlete personalities, newspaper coverage of sports in
the era of radio completely changed to remain relevant.
Sports magazines also had to find a new space to occupy during radio's reign.
Completely unable to keep up with the live coverage by radio and the strategic coverage of
America's favorite sport, baseball, sports magazines instead began to focus on niche sports such
as yacht racing. The other innovation of sports magazines in the early 1930s was their addition of
full-page color photographs of athletes, something that neither radio nor newspapers could offer
(Enriquez 2002, 202). They remained an important sports medium but had been supplanted by
both radio and newspapers. Baseball's hold on the American public was so strong that the niche
sports, which were typically covered in sports magazines, hardly seemed relevant. Football in
particular rarely saw coverage anywhere other than sports magazines (Bryant and Holt 2006, 32).
Football had traditionally been seen as a college sport reserved for the wealthy, but with an
increasing number of college graduates in the U.S. and the rise of a new medium, its niche status
was about to change (Oriard 2014, vii).
The Televisual Transformation of Sport
Television's debut in the sports world was a colossal failure. Reaching only a
few hundred people, the first American televisual sports broadcast was a Columbia-Princeton
baseball game on May 17, 1939. Just a few years after the commercial release of the television in
the U.S., RCA's first foray into televised sport flopped. The New York Times' Orrin E. Dunlap Jr.
recounted on the following Sunday, "The televiewer lacks freedom; seeing baseball on television
is too confining, for the novelty would not hold up for more than an hour if it were not for the
commentator" (Rader 1984, 17). He goes on to say, "To see the fresh green of the field as The
Mighty Casey advances to the bat, and the dust fly as he defiantly digs in, is a thrill to the eye
that cannot be electrified and flashed through space on a May day, no matter how clear the air."
Bryant, Holt, Enriquez, and Rader attribute the failure of early televisual sports to several
factors. First, television camera technology was rudimentary and receivers were even worse
(Bryant and Holt 2006, 31; Rader 1984, 18). Viewers could hardly see the player, much less
follow the ball or action on the field. Second, television was not a commercial success upon its
release. Sets were expensive and did not offer nearly enough programming to warrant their price:
an issue that created a sort of negative loop as the television industry needed more viewers to
warrant more content yet could not supply enough content to attract more viewers. The third
factor, described by Enriquez, is the failure of broadcasters to adapt to the new medium.
Sportscasters could not actually see the video feed and called the game as if they were still on
radio, recounting every single action that occurred on the field regardless of what was on viewers'
screens at home. Inexperienced camera operators had difficulty following the action and the
image rarely matched what the sportscaster was describing.
Radio sportscasters also had difficulty transitioning into the new visual medium because
they could no longer provide the same level of drama through exaggeration and hyperbole.
Where short infield ground balls could previously be described as laser-fast bullets, the viewers
at home now saw that the play was just another ordinary event. Situated somewhere in between
watching the game live at a stadium yet still sounding like radio, televisual sport had a difficult
time defining itself in the late 1930s and early 1940s. According to Rader, televisual sport
experimentation stopped completely during the Second World War (1984, 23).
With the well-established roles of radio, newspapers, and sports magazines, the revival of
televisual sport seemed to be impossible. The utter failure of televised sports in the late 1930s
into the Second World War left televisual sport in a difficult position. Sports radio's popularity
was at an all-time high in the 1940s. Baseball had captured the hearts and minds of the American
people, and famous radio broadcasters such as Bill Stern and Jack Armstrong kept them listening
with bated breath (Rader 1984, 30-31).
Baseball and more generally live event sports spectatorship, however, could not keep the
nation content for too long. In what has been dubbed the Sports Slump of the 1950s by Rader
and others (Bryant and Holt 2006, McChesney 1989), spectatorship had finally started to
dwindle. Television sets were making their way into homes in record numbers after World War
II. In the post-World War II era, pastimes shifted from inner-city, public forms of recreation to
private, home-centered forms of recreation. Sports revenue was down and change was in the air.
People could watch baseball on their television sets at home, but not many people wanted
to. As shown by the earlier quote from The New York Times, television had difficulty containing
the magic that baseball once held. Football, however, was poised to rise with the new medium. It
had been long overlooked, but football was incredibly well suited for television broadcasts. The
large, visually distinct ball and typically slow moving action provided an acceptable subject for
contemporary television camera technology (Grano 2014, 13). College football had seen a bit of
success in newspapers, but professional football had a negative reputation as a "perversion ofthe
college game played for alma mater rather than a lousy paycheck" (Oriard 2014, vii). Radio
broadcasts of football had never reached the same level of success as baseball.
Professional football seemed to be a sport without a suitable medium. As sports media
scholar Michael Oriard explains, "[o]nly television could give the professional game a national
audience, and Pete Rozelle's defining act as the commissioner who ushered in the modern NFL
was to market the league through a single television contract, rather than leaving clubs to work
out their own deals" (2014, vii). This deal with broadcasting giant, NBC, led to the NFL's great
breakout story and what would soon become the model for televised sports (Rader 1984, 85).
With the NBC still losing money on a dwindling sports fanbase, they were ready to pull the plug
on their deal with the budding NFL until the championship match between the Baltimore Colts
and the New York Giants of 1958 (Grano 2014, 13). This match, still hailed as the 'Greatest
Game Ever Played', would become the longstanding origin story of televised football. The game
went into sudden-death overtime, pushing the broadcast into prime time on the East Coast, a slot in
which NBC never dared to place professional football. As millions of Americans tuned in for
their regularly scheduled programming, they instead found John Unitas and his Baltimore Colts
scoring the game-winning touchdown after a long, hard-fought battle. Oriard, Rader, Grano,
Oates, and Furness all trace the NFL's commercial success to this one defining moment.
As compelling as origin stories often are, the truth is that many other factors led to the
success of football in the new mass medium. New technologies such as video tape were integral
to the rise of football in America. Hitchcock argues that instant replay in particular helped with
the rebranding of professional football: "The use of video-tape gave the game of football a whole
new image... The instant replay changed football from brutal, quick collisions into graceful
leaps, tumbles and falls. It gave football an aura of art in movement. It made football attractive to
entirely new segments of the audience" (1989, 2). Where football players had once been seen as
lethargic brutes, instant replay allowed broadcasters to slow down images, dissect plays, and
highlight the athleticism of players (Rader 1984, 83-84).
Sports, with football leading the charge, were once again on the cutting edge of media
adoption. According to Dylan Mulvin, the first documented use of instant replay for review and
training purposes was in 1957 during a game between the Los Angeles Rams and the San
Francisco 49ers (2014, 49). By 1964, instant replay was a standard broadcasting technique across
all sports. The NFL's willingness to adapt to the new medium set it apart from other sports at the
time.
In addition to these technological and legal advances, Bryant and Holt as well as
McChesney argue that one particularly innovative producer reinvented sports broadcasting for
television: Roone Arledge. With ABC's full support, Arledge established television broadcasting
conventions still present today. After the 1958 Championship game between the Colts and the
Giants, ABC was scrambling to catch up to NBC's success in televised sports broadcasting.
As Enriquez describes, "Television broadcasting affected different sports in different ways. It
devastated boxing, had mixed effects on baseball, and proved a boon to college and professional
football" (2002, 202). As NBC began to ride the wave created by the NFL, ABC looked to get in
on the action.
Arledge was given free rein to perform a complete overhaul of ABC Sports. Bryant and
Holt argue that the single most important innovation Arledge brought was the notion that a
televisual broadcast should be presented "from the perspective of what the typical fan would see
if he or she attended the game live" (Bryant and Holt 2006, 33). Arledge (2003) believed that the
broadcast should capture the essence of attending a game, not just the play on the field, but the
roar of the crowd, the cheerleaders, the marching bands, and the coaches on the sidelines. As
Enriquez describes, "under Arledge, television assumed every role previously played by print
media; it served as the primary medium for experiencing events, it provided detailed analysis,
and it gave human faces to the participants" (2002, 205). Through football, televised sports were
able to set conventions which separated them from earlier forms of media. This transition lives
on in live-streaming today, as we will see later with live-streaming's adaptation, rather than
transformation, of televised sport.
The arrival of television meant that sports radio and print media had to redefine their role
in sports coverage. Television could deliver the liveness of radio and, with the help of
commentators and technology like instant replay, the drama and dissection of strategy found in
print media. Newspaper coverage of sports was now relegated to simple recaps. Sports
magazines, on the other hand, rode the success of television. As Bryant and Holt assert, "Sports
Illustrated offers a classic example of an old medium responding to a new one" (2006, 36).
Rather than seeking out an area left uncovered by television, Sports Illustrated supported
televised sports by providing innovative action photography and updates on the most popular
athletes and teams at the time.
Sports broadcasts of the 1960s were infused with the hopes and fears of the Cold War
era. R. Powers, a television sports scholar, suggests that sports filled a void in the American
public, "shrugging off the darker morbidities of the Cold War and McCarthyism" (1984, 118).
The re-found focus on sports as spectacle was established by "the youthful theme of ABC, echoed the
Kennedy idealism of the new frontier, the sporting emphasis echoed Kennedy's image of
muscular athleticism..." (Whannel 2002, 34). Entertainment sports media, with its art-in-motion
presentation, delivered a message of newness and regeneration to Americans.
Through broadcasting and advertising deals, sports helped build and perpetuate the
growing conspicuous consumption movement and the capitalist ideals of post-war America.
Athletes resumed their star status. Sports stars began appearing in advertising everywhere.
Merchandising became a key part of sports promotion. Anything from replica jerseys of sports
stars to blankets and flags with team branding can be found almost anywhere in the U.S.
Contemporary sports fandom has come to mean much more than simply following a team. It
means buying a team's products, playing sports video games, joining fantasy leagues, and
watching sports entertainment television. Oates, a sports media scholar focused on the NFL,
writes that fandom has been transformed by the presentation of athletes as commodities to be
consumed selectively and self-consciously by sports fans (2014, 80). The previously subcultural
hyper-fandom activities such as fantasy football and sports video games, Oates argues, have
moved into mainstream prominence and profitability. Fans are invited to interact with athletes as
vicarious managers in fantasy sports, offering a completely new, personally tailored form of
interaction with sports organizations. This new drive for constant connection and feedback
within the sports industry culminates with live-streaming.
Live-Streaming: Constant Connection
As Oates suggests, sports fandom has fundamentally changed to reflect an increased
involvement on the part of the spectator. Athletes and personalities have become commodities
for fans to interact with. Social media, fantasy sports, and video games have created a connection
to sports stars that was never before available in other media. At any moment, a spectator can
catch highlights on ESPN, head over to forums to discuss major sporting events, or load a stream
of a match on their phone, all while tweeting at their favorite athletes with the expectation that
their words will be received on the other end.
Recent trends show a change in the sports media landscape as new platforms begin to vie
for control over sports broadcasting in the US. The NFL has recently signed a deal with Google
allowing for the streaming of games over the internet after their current contract with DirecTV
ends in 2015. This deal reflects the changing media landscape in the internet era. The rise of new
streaming platforms poses an interesting dilemma to the current media titans and new
opportunities for new forms of media sports. Thus far, using the tradition established by
McChesney, Bryant, Holt, and Rader among others, I have used sports media as a lens through
which to view particular socio-cultural moments in America. I now turn that lens towards the
contemporary sports media landscape. What can we learn about our own social moment by
looking at the use of streaming platforms for traditional sports or the arrival of e-sports as an
entirely new form of professional competition that makes use of older forms of media, but
thrives in live-streams and video on demand?
The MLB offers an early case study into the use of live-streaming for major league sports
broadcasting. The regular season in the MLB consists of 2,430 games, a staggering number
compared to the NFL's 256. The sheer number of regular season games held each year causes a
problem with over-saturation. This inundation of content lowers the value of each individual
game in the eyes of the major networks (Mondelo 2006, 283). Games that these networks chose
not to air due to scheduling conflicts previously went unseen by fans outside the local media
market of the two competing teams. To remedy the situation, the
MLB streamed over 1,000 regular season games online starting in 2003. The launch of MLB.tv
in 2002 allowed engaged MLB fans to continue watching content even when they did not have
access to the games through the major networks. While not initially a huge commercial success,
MLB.tv still runs today, over a decade later at a monthly subscription of $19.99 and as of 2014
incorporated both post-season games and the World Series as part of the package (MLB.tv
2015). While the MLB has not released the official revenue totals for its live-streaming service,
with 3.7 million subscribers the platform generates well over $400 million per year (MLB.tv
2013). This little-known use of live-streaming shows a hunger for immediate interaction with
sports media regardless of the available medium.
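A rough back-of-the-envelope check (my own estimate, assuming every subscriber pays the $19.99 monthly rate and setting aside MLB.tv's discounted annual packages) supports that figure: even if the 3.7 million subscribers each paid for only six months of the season, revenue would come to roughly 3.7 million × $19.99 × 6 ≈ $444 million.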
Early live-streaming fundamentally looks and feels like television, but it filled a role
which network television could not: all access and constant connection to media. It took form on
a new platform, but did not truly differ from television. Early live-streaming is more like an
adaptation of television than a new medium. Rather than creating something new, the early foray
into live-streaming by the MLB simply adapted the already present broadcasting infrastructure
and applied it through a different avenue. Television is often invoked in live-streaming. If we
look at MLB.tv, the .tv signifies its connection to television, but that domain is actually the
official domain for the country of Tuvalu. Other streaming platforms like ustream.tv, twitch.tv,
MLG.tv, all based outside of Tuvalu, use the same domain to signal their televisual connection.
Live-streaming emerged at a very particular moment in the evolution of sports media.
With air-time limited on the major networks, the internet allows a near infinite amount of content
to reach sports fans. As Oates would argue, from fantasy sports, to blogs, to live-streaming, the
internet is, for many, the new space of the sports fan. Live-streaming goes beyond the ability of
other media to reach viewers wherever and whenever, whether from a home computer or a
mobile device. Live-streaming delivers on the constant connectedness expected by consumers
today. At its roots, live-streaming is a televisual medium. So what separates it from television?
Live-streaming today has created its own niche by blending other forms of media. Most
live-streams host an internet relay chat (IRC) in addition to the audiovisual component of the
broadcast. This IRC allows viewers to chat with other audience members and often the
broadcaster, a functionality not currently available in television. This live audience connection in
live-streaming is unparalleled in television. Hamilton et al., in their investigation of the
significance of live-streaming for community creation, situate Twitch streams as an important
'third place' for community. Building on the work of both Oldenburg and McLuhan, Hamilton et
al. (2014) suggest that "By combining hot and cool media, streams enable the sharing of rich
ephemeral experiences in tandem with open participation through informal social interaction, the
ingredients for a third place." The third place that the authors point to creates a rich connection
akin to interpersonal interaction. The ephemeral nature of these interactions creates a deep sense
of community even in streams with hundreds of thousands of viewers. Live-streaming, and in
turn the IRC associated with streams, creates a shared experience tantamount to the "roar of a
stadium" (Hamilton et al. 2014). These streams also pull in a global audience, connecting
isolated audiences into one hyper-connected community. Live-streaming draws on television for
its look and feel, but delivers not only on the desire for liveness perpetuated in sports media but
also the hyper-connectivity present in today's globalized world.
E-sports, Live-streaming, and Sports Media
Many factors contributed to the success of live-streaming for e-sports. It arrived at a
moment when television seemed closed to e-sports, it was much less expensive to produce, and
much easier to cultivate. Television broadcasts are prohibitively expensive to produce. Early
attempts at airing e-sports on television have typically flopped, rarely surviving past a second
season. E-sports are difficult to film when compared to traditional sports, and conventions had
not yet been set for the televisual presentation of e-sports (Taylor 2012). The action in traditional
sports can typically be captured by one shot. E-sports broadcasts, in contrast, must synthesize
one cohesive narrative out of many different player viewpoints with varying levels of information.
In a game like CounterStrike, broadcasters must wrangle with a large map with ten players in
first-person perspective. The resulting audiovisual feed is a frantic attempt to capture the most
relevant information from the players with an outside 'observer' controlling another viewpoint
removed from the players' point of view. The observer functionality in the early days of e-sports
broadcasting created a difficult barrier to overcome for commercial success on television.
Observer functionality had not yet become a focus for game developers and commentary had not
reached the level of competency it has in more contemporary broadcasts.
Instead of finding success on television, e-sports pulls in millions of concurrent viewers
on live-streaming sites such as Twitch.tv. With television seemingly out of reach and streaming
requiring significant investment per event in the early 2000s, e-sports broadcasting remained
relatively stagnant until the arrival of a reliable and cheap live-streaming platform. Justin.tv
(and other similar sites like UStream and Stickam), which launched in 2007, delivered exactly
what e-sports broadcasters needed to grow. The site allowed users to quickly and easily stream
content online with the use of some relatively simple software. Both broadband internet reach
and streaming technology had developed to a point that lowered the barrier of entry for
broadcasters. Players from around the world streamed games from their bedrooms. E-sports
broadcasters reached new, massive audiences.
The success of gaming content on Justin.tv spurred a new streaming site dedicated solely
to gaming. The games-centered streaming site, Twitch.tv, launched in 2011. Twitch.tv
revolutionized the e-sports industry. Each of the casters I interviewed spent time detailing the
importance of Twitch.tv without being prompted. As one explained, Twitch.tv is "the clearest
driving factor that's grown e-sports over the past 2-3 years." As mentioned in the introduction, e-
sports audiences have reached previously unheard of levels. Large scale e-sports events regularly
see concurrent viewer numbers in the hundreds of thousands. These broadcasts, however, still
largely resemble televised sports, rarely, if ever, making use of the IRC.
Live-streaming is just one of the forms of media the e-sports industry makes use of. In
fact, e-sports interacts with most media in the same ways that traditional sports have. The e-
sports industry pushes back into almost all of the earlier forms of media discussed in this chapter.
Print and radio typically fill a PR role in e-sports coverage. Large events or developments often
make their way into publications like The New York Times. Local radio segments will
occasionally feature summaries of e-sports events occurring nearby. Internet versions of both
print and radio sports coverage are fundamental segments of the e-sports media ecosystem.
Podcasts, digital audio files available on the internet through downloads or streaming, vlogs, and
video diaries fill essentially the same role for e-sports that radio currently plays for traditional
sports. Experts weigh in on recent developments, and players break down certain aspects of a
game.
E-sports journalism has also emerged as a legitimizing force within the industry. Sites
like ongamers.com and esportsheaven.com keep fans abreast of any new developments in the
professional scene for all of the major e-sports titles. Journalists like Richard Lewis add
legitimacy to e-sports through their coverage of current events. Their recaps of developments as
well as summaries of various tournaments and leagues closely resemble their print counterparts
in sports coverage. It is clear that the e-sports industry is in conversation with many forms of
media. Many of the forms and techniques are borrowed directly from sports coverage. These
forms of media did not appear instantly, however; they are the result of years of push and pull
with the larger sports media landscape. Nowhere is this more apparent than in the commentating
of e-sports live-streams.
Chapter 2
Shoutcasters Collecting Conventions
E-sportscasters, often referred to as shoutcasters, both look and sound like professional
sportscasters. Their attire and cadence both create an instant connection to televisual sports.
Having never seen a game of Starcraft 2 before, you may watch the flashing lights and
explosions with a perplexed look on your face. As you continue to watch, you hear two
commentators provide a narrative, stats fly across the screen, and you start to piece together the
game in front of you. After a few minutes, you know the two players who are facing off against
one another, you feel the excitement as they engage each other's armies, and a slight sting as the
player you were rooting for concedes the match with a polite "GG." The whole presentation feels
like a variant of Monday Night Football with virtual armies instead of football teams. From the
stat-tickers to the sound of the commentator's voice, you can almost imagine the ESPN or CBS
logo gracing the bottom corner of the screen. Shoutcasters have become a staple in e-sports. One
of the main signifiers of the 'sports' moniker professional gaming has taken on, shoutcasters lend
an air of professionalism to a scene which often struggles to define itself. With the adoption of
the 'sport' title, a precedent has been set for e-sports broadcasters which informs their style and
conventions.
Shoutcasters are important to investigate because they form a fundamental grounding for
e-sports which helps it to create its identity in the face of blistering turnover rates and constant
field shifts. E-sports stand in a unique position compared to traditional sports. Where players and
coaches in traditional sports often have careers that last for several years, e-sports personalities
suffer from intense turnover rates where professional careers can end within a year. E-sports
players burn out quickly and coaches rarely make a lasting name in the industry. The
recognizable personalities in e-sports are the few innovators and commentators who turned their
passion into a career. In this chapter, I analyze the role of shoutcasters within the larger
framework of the e-sports industry. I build much of this analysis on the foundation that Taylor
(2012) established in her investigation of the rise of e-sports. Much of Taylor's analysis still
holds true today, but some other developments in the field have created new dynamics within
shoutcasting that were not present during her initial encounters with shoutcasters. Understanding
how shoutcasters borrow from earlier forms of media, the issues they perceive within the
industry, and how they cultivate their own identity as shoutcasters while grappling with the
hyper-connection found in live-streaming as a medium allows us to grasp the relationship e-
sports broadcasting has with earlier forms of media while still creating its own identity. I begin
with a very brief look at the history of shoutcasting.
Shoutcasting History
One can see that even early attempts at broadcasting competitive gaming borrowed
heavily from its media contemporaries. Starcade, a 1982 show that ran for two years, marks one
of the first forays into e-sports broadcasting. Though the term e-sports had not yet emerged, the
show featured two opponents attempting to outscore each other on various arcade machines. If
we look to Starcade as an early example of e-sports, then the origins of e-sports commentating
resemble game show commentary found in Jeopardy! or The Price is Right. Watching Starcade
for the hosting alone reveals many similarities to other game shows: the host wears typical game-
show host garb, pleasantly explains every aspect of the competition, and speaks with the
broadcast voice we all recognize. Starcade also shows the constant evolution of competitive
gaming coverage as it continued to refine its camera angles, presentation, and format over its two
year run.
The model which more closely resembles our modern vision of shoutcasting gained
momentum at the turn of the twenty-first century. The title shoutcaster comes from the early
streaming software used for e-sports broadcasting, SHOUTcast. While many people familiar
with e-sports may have no idea where the term comes from, a prominent shoutcaster, djWHEAT
(2012), claims that the title remains due to its signaling of the history of e-sports. SHOUTcast, a
media streaming program, arrived in 1998, allowing interested parties to broadcast audio
recordings to various 'radio' channels for free. SHOUTcast allowed for video streaming, but as
one early shoutcaster I interviewed lamented, the bandwidth and equipment required for video
streaming was prohibitively expensive.
Instead of the audiovisual broadcast we regularly associate with e-sports live-streams
today, early shoutcasters relied on audio recordings akin to early radio coverage of traditional
sports. These early broadcasts only streamed audio to a few hundred dedicated fans on internet
radio. Early shoutcasts follow the form of traditional play-by-play radio broadcasts, focused
primarily on presenting every development in the game. In interviews, veteran shoutcasters were
not shy about admitting the influence radio sportscasters had on their own style. One mentioned
that he spent hours listening to live sports radio to hone his own skills.
Early shoutcasters also performed many aspects of the production that they are no longer
required to perform in the more mature e-sports industry. They would attend events, set up their
own station, typically with their own laptop and microphone. It was a very grassroots affair.
With little experience in the technical aspects of broadcasting, the productions emulated as much
as they could from sports broadcasting to lend an air of professionalism.
With the arrival of Twitch.tv, and other reliable streaming platforms, much of the onus of
production was taken off of shoutcasters. Instead of acting as producers, directors, editors, and
on-air talent all at once as they had in the early audio-only streams, shoutcasters are now more
able to focus on the portion of their work from which they get their name. Shoutcasting after the
early days of internet radio has come to not only sound like traditional sportscasting, but also
look like traditional sportscasting.
Something Borrowed: Influences from Sportscasting
Wardrobe
Many of the shoutcasters I interviewed talked about wardrobe as a huge change within
shoutcasting, one that was spurred entirely by looking at traditional sportscasting. Most
shoutcasters got their start wearing t-shirts and jeans at various e-sports events. Today, you will
rarely find a shoutcaster not wearing a shirt with a blazer. The images below show the
incredible shift in shoutcasting just within the last six years. Both images feature the same
shoutcaster: Joe Miller.

Figure 2 - Left: Joe Miller at the 2009 Intel Friday Game London; Right: Joe Miller at the 2015 Intel Extreme Masters World Championship in Katowice, Poland. Image credit: ESL, Philip Soedler and Helena Kristiansson. Flickr.com/eslphotos

The left-hand image comes from the 2009 Intel Friday Game London
while the right-hand image comes from the 2015 Intel Extreme Masters World Championship.
While the images are quite similar, the professionalism apparent in the right-hand image
resembles a professional sportscaster. The gamer/geek vibe found in the left-hand image has
been removed from the shoutcasting image. As a few of the shoutcasters I spoke with admitted,
the drive to rework the shoutcaster wardrobe came purely from traditional sports. On top of that,
they pointed to a desire to shed the gamer/geek stereotypes that e-sports had come to inhabit. By
adopting professional attire, they felt that they could get rid of the old image and emulate the
professionalism of a sports broadcast. Wardrobe is not the only aspect of traditional sportscasting
that has made its way into shoutcasting.
Style
One of the more elusive aspects borrowed from traditional sports is the actual
commentary style. I use the term elusive here to signal the difficulty in pinning down exactly
why shoutcasters remind us so vividly of traditional sportscasters. Early shoutcasters had no
models outside of traditional sportscasting so they took as much as they could: "So as a
broadcaster we look at traditional sportscasting. We pull from that and then make sure it fits in
game casting." As it turns out, many sports commentary conventions translate well into game
casting. As such, the first generation of casters share many similarities with television
sportscasters. Most of these early shoutcasters admit to being influenced almost entirely by
traditional sportscasters. One caster explains, "Television is where we grew up, it's what we
watched. So clearly that's where we're going to pull from."
Shoutcasters typically have no media training, instead relying on mimicry of earlier
conventions to get by. As with most positions in e-sports, and similar to early sports writers and
radio casters, shoutcasters are just passionate fans turned professional. In conversations, they
each revealed a bit of their own personal history that pushed them towards broadcasting, but only
one ever mentioned having received any sort of formal training. Years into his shoutcasting
career, he "went back and did a journalism and broadcasting course for 6-9 months." Of
particular note, he mentions, "they did one really good project which was 'how to be a news
presenter'. They taught me the basics of that." The rest, he says, he learned on-air through
experience. The other shoutcasters I interviewed echoed this story.
Most of the shoutcasters I interviewed fell into shoutcasting through happenstance and
had to learn their craft on-air. Shoutcasters are akin to the very early television sportscasters who
had to reinvent their style during broadcasts, such as Bob Stanton, a radio sportscaster turned
television sportscaster who would send his friends to sports bars to gather feedback and
suggestions from audience members (Rader 1984). Echoing this inexperience and improvisation,
one shoutcaster I interviewed confided, "the first time I had ever been on camera, I sat down and
I was like, 'I have no idea how to do this.' I had done two and a half years of audio casting, but I
had never done video." Another caster recalls of his first show, "All I knew going into my first
broadcast was that I know this game. I know how it works, I know these players, and I play
against these kinds of players. I don't know how commentary works, but I can do this." After
these first, trial broadcasts, both of the above-mentioned shoutcasters admitted to going back and
watching traditional sportscasters to learn more about their craft.
Other broadcasting style conventions such as how to handle dead-air, how to end a
segment, or how to transition into gameplay were lifted directly from sportscasting. Paul
"ReDeYe" Chaloner, a prominent personality within the e-sports industry, addresses each of
these techniques in his primer on becoming a professional shoutcaster, constantly pointing to various examples from traditional sports broadcasting to illustrate his points. In his section on dead-air, Chaloner writes, "[o]ne of the best pieces of advice I had for TV was from legendary
sports producer Mike Burks (11 time Emmy award winner for sports production) who told me 'A
great commentator knows when to shut up and say nothing"' (2009, 9). Chaloner uses traditional
sports broadcasting as a way to explain shoutcasting, a clear indication of its influence on e-
sports broadcasting.
Content Analysis: Play-by-play and Color Commentary in the NFL and LCS
Another convention lifted directly from traditional sports broadcasts is the arrangement
of the casting team. Traditional television sportscasters fall into one of two roles: play-by-play or
color commentary. Shoutcasters use these same two roles. Both sports broadcasts and e-sports
broadcasts feature one of each type. The play-by-play commentator narrates the action, putting
together the complicated and unconnected segments of the game into a cohesive narrative. The
color commentator provides their in-depth analysis of the game, typically from the stance of a professional player.
Shoutcasters have adopted the two-person team directly from traditional sports
broadcasts. The path to each role follows the same pattern as well. An ex-professional player
almost always fills the role of color commentary in both traditional sports and e-sports. Their
insight is unparalleled. Color commentators attempt to break down complex series of events or
highly technical maneuvers as if they were still a professional player. In the words of one e-
sports color commentator, "I'm not pretending to be a professional player, but I'm doing my best
to emulate them." He goes on to say, "You can read up on it and study it as much as you like, but unless you've lived it, you can't really comment on it." In comparison, a play-by-play
commentator does not need to have the technical depth, but relies more on presentation. Even
though a play-by-play commentator has most likely played hundreds of hours of whichever game
they cast, they cannot fill the role of the color commentator. This dynamic allows for play-by-
play commentators to switch games with relative ease whereas color commentators, both in
traditional sports and e-sports, are locked into one game.
To illustrate the emulation of sports broadcasting found in e-sports, I now turn to a brief
content analysis of the commentary found in a regular season NFL game and a regular season
League of Legends Championship Series game. I start with the commentary from one play in an
NFL game. After presenting the traditional model, I move to the commentary from one team
fight in League of Legends to demonstrate how the convention has been adapted for e-sports
commentary. In both cases, I have removed the names of players, commentators, and teams to
cut down on jargon and clutter. Each case exhibits the dynamic present in the two man
commentary team.
NFL
With both teams lined up, the play begins and the play-by-play commentator comes in immediately.
Play-by-play: Here's [player 1] out to midfield, a yard shy of a first down. [player 2] on the tackle.
After the play has ended, the color commentator takes over.
Color: It's been [team 1] on both sides of the ball. Whether it be defense and the way that they dominated this ball game and then offensively, the early going had
the interception, didn't get much going over the next couple of possessions offensively but since that time, [player 3] has been very precise in how he has thrown the football and they just attacked this defense every which way.
LCS
Three members of the Red Team engage Blue Team at Red Team's turret
Play-by-play: This is going to be dangerous. Doing what he can to hold out. They're going to grab the turret, the fight will continue after the shield onto [player 1] is already broken. He gets hit, the ignite is completely killing the
ultimate! He gets hit by [player 2] who turns around again and heads back to [player 3].
With the action over for the moment, the color commentator begins to speak
Color: I thought he finished a camp here too...
The color commentator is cut off as two more members of Blue Team attempt to attack.
Play-by-play: Heyo, as the top side comes in here too. [player 1], will he hit a good ultimate!? Oh! They were staring right at him but now he's just left to get shredded apart here. They couldn't have thought that this was going to go well for them.
With the fight concluded, the color commentator continues again.
Color: Is this just the week of chaos? Because that was a really really uncharacteristic lapse in judgement from [Blue Team]: Not calling everybody into
position at the right time, and [Red Team] with the advantage make them pay for it. They didn't expect the ignite from Nautilus. I think they expected Nautilus to
have exhaust instead, but [player 1] pops the ignite, and as we said there is no armor so [player 2] just... and it continues!
The color commentator is cut off once again as the two teams engage one another for a third time.
If we look at these examples for their content rather than the specific moment in the game, we can
catch a full illustration of the two-caster dynamic. As we can see by the NFL example, the play-
by-play commentator provides a running narration of the action in the game. When the action
ends, the color commentator provides the meta-level analysis of the unfolding events. In the LCS
example, we see that the same dynamic is present, however, due to the continuous action in the
game, the transition into color commentary becomes difficult. In the first lull, the LCS color
commentator tries to insert his analysis, but he is cut off by a second engagement. The color
commentator stops talking immediately and allows the play-by-play commentator to continue
describing the action. After the engagement ends, we hear the color commentator pick up again, explaining why the fight developed the way it did as well as his insight into why the teams played the way they did.
Entertainment and Narrative
Entertainment value was a repeated concept in my interviews with shoutcasters. Some
went so far as to claim that their role was only to entertain. One stated, "I want to get you
excited. I want to get you to watch the game as if it was a show on television." Many would
point to good sportscasters as an example to follow. If we recall the example of the early days of
radio sportscasting, casters had a difficult time making the transition to the new medium. Their
broadcasts felt flat when compared with their print counterparts (Bryant and Holt 2006, 27).
Early sportscasters got locked into the idea that their responsibility was to provide the basic play-
by-play depiction of a match. The golden age of sports radio was ushered in by sportscasters,
such as Graham McNamee, who were so popular that they would be asked to cast
games remotely. McNamee, like a live version of his print counterparts, was famous for creating
florid depictions of the game; athletes became heroes and their play became combat as told by
McNamee. While the presentation of live and accurate information was still essential, popular
radio sportscasters shifted sports media from news reports to entertainment. Sportscasters are
responsible for this shift. Without their expert embellishment, play-by-play depictions lack
entertainment value.
Even non-sports fans can feel the excitement from a particularly good sportscaster. The
game they portray is far more intriguing than any actual events happening on the field (Bryant,
Brown, Comisky, and Zillmann 1982). This disconnect forms one of the primary reasons that the
transition to casting televised sport was so difficult. The small liberties that sportscasters took
were no longer acceptable in the visual medium. Once the home viewer could see the game,
commentary had to shift to accommodate more scrutiny. Radio sportscasters were notorious for
their embellishment. As Bryant, Comisky, and Zillman note from one of their several
investigations of sportscasting, roughly forty percent of commentary is dramatic embellishment
(1977). In 1977, the authors tracked the amount of hyperbole and exaggeration in sports
broadcasting and found that over half of the speech was dedicated to drama. E-sports
shoutcasters, by comparison, rarely use dramatic embellishment of action. A few of the
informants noted that they feel that embellishing actions is not possible due to their audience.
The e-sports audience, as pictured by shoutcasters, includes mostly dedicated players.
While many sports fans may play their sport casually, e-sports fans engage with the games they
watch regularly. As one shoutcaster explains, "we've only ever gone out to a hardcore audience."
He acknowledges that the current audience is in flux, but the primary base of e-sports fans are
intensely dedicated viewers and players. Because of this dynamic, shoutcasters feel that
embellishment of the actions on screen would be difficult to slip past a discerning eye. Their
belief that dramatic embellishment isn't possible may say more about their understanding of
traditional sports fans than it does about their formulation of their role as commentators. While
unacknowledged in interviews, the possibility for shoutcasters to add embellishment exists. Their
choice not to use embellishment speaks more to their formulation of the e-sports audience than it
does to their casting quality. Instead of embellishment of action, shoutcasters rely on another
convention found in traditional sportscasting: narrative.
Studies that focus on the media effects of sportscasting suggest that sportscasters
fundamentally alter the audience perception of the telecast through story-telling and narrative
(Krein and Martin 2006). Sportscasters take many liberties in their descriptions of the game to
add a dramatic flair. In several empirical studies, Bryant, Brown, Comisky, and Zillman (1979)
found that when sportscasters created a narrative of animosity between players, viewers felt an
increased amount of tension and engagement. They conclude that the narrative scope of the
sportscaster is critical in the perception of sports broadcasting. This narrative creation has bled
into shoutcasting as many shoutcasters attempt to amplify the emotional content of their games
by highlighting underdog stories or hyping up animosity between players. One caster I
interviewed connected his work to the narrative creation in sports commentary by stating,
"Emotion is one of the key words in commentary. You need to be able to connect a certain
emotion to the words you're saying. You need to be able to make someone scared for their
favorite player or overjoyed when they win. Create greatest enemies. You need to be able to
make these feelings through what you say or how you say it. Emotion is everything." This caster
goes to great lengths to dig up statistics from previous matchups to provide a narrative for the
match he casts. Through this investigation, the shoutcaster is able to contextualize a match with a
rich history. Perhaps two players have met three times before and each time the result has been
the same. Will viewers be able to share in the momentous victory of the underdog? As part of
their preparation, shoutcasters will research all of the previous meetings between two players to
create a history between them, a tactic which they acknowledge has been used in traditional
sports for decades.
Production
Stream production is another realm where e-sports have started to borrow heavily. While
e-sports producers may have gotten a head start on streaming live events, they often rely on the
expertise of television producers to put a show together. Multiple shoutcasters pointed to a
steady influx of television producers making their way into e-sports, "the way we approach a
production is very much like television. A lot of the production guys that are getting into it are
from television." In fact, the executive producer of the League of Legends Championship Series, an immensely popular e-sports program, is former emmy-winner Ariel Horn. Horn won his Emmy as an associate producer of the 2004 Olympics for NBC. Likewise, Mike Burks, executive producer for the Championship Gaming Series mentioned in the above quote from Paul Chaloner, had an immense amount of experience in televised sports before migrating to e- sports. These are just two of the many experienced television producers making their way into e- sports. Their style is beginning to show as e-sports events become more polished every year. If we recall the image of Prime Time League in the introduction to this thesis, we can see the influx of television conventions in e-sports from the production side. The shoutcasters benefit from the experience of working with television producers to refine their style. As the field has grown, however, we begin to see minor tweaks in style and delivery. Spending a significant time with e- sports casting, in comparison with sportscasting, reveals several distinctions. Much of this difference comes with the age of the field, but just as Starcadeevolved over its short lifespan, shoutcasters have found ways to make themselves unique. Their understanding of their role within the overall e-sports industry informs us of some of the key differences here.
Something New: Shoutcaster Identity
Shoutcasters are situated somewhere between fan and professional. As evidenced by the
above investigation of how shoutcasters are informed by their traditional predecessors, the role
of shoutcasters is still very much in flux. Shoutcasters are just recently creating their own
identity separate from their sportscasting roots. In particular, the less experienced shoutcasters I
spoke with use markedly different models to inform their own casting.
The Second Generation of Professional Shoutcasters
A second generation of casters is just now coming into the scene. Instead of looking to
traditional sportscasters as their models, they emulate veteran shoutcasters: "my influences are
the streamers that I watched. I watched everyone who casts and commentates...my commentary
style comes from those guys. I don't know how much is conscious or just mimicry." This new
caster has been on the scene for only a fraction of the time that the veterans have. In that time he
has honed his shoutcasting skills not by finding sports commentary and seeing which aspects
apply to shoutcasting, but by absorbing as much information as he could from other shoutcasters.
Another fresh shoutcaster offers a fascinating disconnect from the older casters: "I definitely
bounce off more e-sportscasters than sports. I just watch more e-sports than sports. Sports are so
different than e-sports, there's so little that I can actually use from them." Where his
predecessors admit to borrowing primarily from traditional sportscasters, this new generation has
left the realm of traditional sportscasting behind.
The professional casters provide material for an amateur level of shoutcasters to pull
from. The shoutcasters I interviewed were all professionals who typically work on major events
with massive support and budgets. With a robust network of shoutcasters to pull from, however,
we may see much more support for the grassroots level of e-sports that many early fans are
accustomed to. Current shoutcasters also provide a model for potential careers. Through the
hard-fought struggle of years-worth of unpaid events, the shoutcasters I spoke with have created
a legitimate profession worth pursuing. Most warned me that the path is no longer as easy as they
once had it. Most of them pursued shoutcasting for the love of e-sports. They had years to
fumble through persona creation, broadcast techniques, and conventions.
New, potential shoutcasters are automatically held to a higher standard. A senior caster
offered the following advice, "With how casting has changed, you need to be open to casting
multiple games. You have to be willing to learn. There is a lot we can teach a caster, but you
have to have some skills within you alone. You have to have some camera presence." The
mention of camera presence signals a significant jump from early shoutcasting. Just a few years
ago, the shoutcasters I interviewed sat down in front of a camera for the first time armed with
nothing but game knowledge; camera presence was a foreign word to them.
Perhaps the most significant change to casters is their overall level of experience. Some
of the shoutcasters I spoke with have been broadcasting for over a decade. Time has allowed
these casters to experiment and find their own style. As mentioned earlier, many of the minutiae
involved in running a show take time to learn. Most casters got their start casually. They may
have been passionate about e-sports and created a role for themselves within the industry. Some
are former players who made the hard decision to give up on their hopes of winning big to
instead cultivate a community.
As new professionals, shoutcasters are just now coming together with the support of e-
sports companies under legitimate full-time contracts. The professional casters I spoke with all
acknowledged a significant change in their commentary since making the transition into full-time
casting with other casters around for feedback and training. One explained that he had never
been sure how to handle dead-air, moments when both casters are silent and there is little action
in the game. Through feedback sessions with other casters, he learned that there are some
appropriate times to let the viewer formulate their own opinions on the match. Heeding the
advice of veteran casters like Paul Chaloner, he went on to explain that one of the problems he
sees in shoutcasting more generally is that shoutcasters are afraid to just be quiet during a stream.
Part of the emotional build-up of a game, he explains, is letting the natural flow of a game take
its course without any input from the casters.
It will be fascinating to watch as these expert networks inform e-sports broadcasts across
the world. One informant remarked, "Now that we're all working together, we're learning a lot
off of one another, which hasn't happened in commentary before." Beyond allowing veteran
shoutcasters to compare notes, the professional status of shoutcasting provides training to new
shoutcasters. One veteran claimed, "All the junior people are learning so much faster than we
ever did. They're taking everything we learned over 5-10 years and doing it in months." These
veteran casters can now pass on their experience and their style. Techniques like hand-offs at the
end of a segment or transitions from the desk to gameplay often came up in my interviews as
issues which take years to learn, but newer shoutcasters are able to pick these cues up from
earlier shoutcasters instead of taking what they can from a sports show and hoping that
everything translates well.
Beyond the expected roles that shoutcasters fill, they also perform many secondary tasks
which don't typically fall to traditional sportscasters. In the very early days of live-streaming, shoutcasters were often responsible for every aspect of the broadcast from set-up to teardown. Some shoutcasters still regularly assist on production aspects of the broadcast such as graphics
packages, camera set-up, and audio checks, but others leave the production aspects of the stream
to more experienced hands while focusing instead on updating websites, answering tweets, creating content, or streaming their own play sessions. No two casters seem to fill exactly the same role within the broadcast team. They do, however, share some similarities which seem to form the shoutcaster identity.
Record-keepers and Community Managers
All of the casters pointed to stats-tracking as part of their roles outside of their air-time
responsibilities. Most of them keep highly detailed databases full of every possible stat they can
get a hold of from game clients and public databases. These stats can be as simple as wins and
losses from remote regions or LAN tournaments that do not post their results online. The stats
can also get as minute as the number of units a particular Starcraft 2 player built in one particular
match. When the data isn't readily available, shoutcasters go out of their way to curate the
database themselves. While some keep their database secret to provide a personal flair to their
casting, others find it important to share this information with their e-sports communities. One
shoutcaster recalled his surprise when he first worked with a major South Korean e-sports
company with its own dedicated stats team. He expressed that he had never realized how much
he needed a dedicated stats team like you find in traditional sports until that moment. It was then
that he realized how much of his daily routine stats curation filled. While he was grateful for the
help, he also felt personally responsible for stats collection and did not entirely trust the figures
from the professional statisticians. This example shows the difficult position e-sports fills,
constantly stuck between borrowing from traditional sports while not fully able to cope with the
maturity of the sports media industry.
Another role which tends to fill a shoutcaster's daily routine is community maintenance.
Whether the caster creates their own content on gaming sites, responds to fans on social media,
or spends their time streaming and interacting with the community, they all mentioned some
form of community maintenance as part of their duties as a shoutcaster. This particular focus on
community maintenance most likely results from the grassroots origins of shoutcasters. These
casters were a part of an e-sports community long before they became shoutcasters. Whether
they view it as their professional responsibility or a social responsibility remains unclear. They
all admit to some level of e-sports advocacy, however. They view PR and the proliferation of
e-sports as part of their responsibilities. The most effective way to tackle this issue, many of them
have decided, is through community engagement. The community aspect of shoutcasting identity
leads me to a discussion of the affordances of the hyper-connectivity in live-streaming.
Grappling with the Hyper-Connectivity in Live-streaming and E-sports
Shoutcaster Connection
I have yet to meet anyone in the e-sports industry who has not remarked on the unique
level of connection present in e-sports. Shoutcasters especially tap into the network created in
these online communities. In a representative summary of my conversations, one shoutcaster
explained, "the connectedness is so unique in e-sports. The way that we can interact with fans
instantly. The players at the end of the day are gamers, they know exactly where to look.
They've got Twitter, they go on Facebook, they post on Reddit." Audience members connect
ephemerally in the IRC of a Twitch stream, but they constantly scour the social media outlets of
their favorite stars, e-sports companies, and shoutcasters, creating a deeply connected
community. Professional shoutcasters understand that the e-sports communities operate in a
unique way when compared to traditional sports fandom. E-sports fans have an odd connection
to franchises or teams within their chosen e-sport. As mentioned before, turnover rates and
general industry growth force entire communities to radically reform from one season to another.
Where traditional sports fans often follow a team based on geographic loyalty, or
familial connections, e-sports fans do not have that option. While you will often hear of fans
cheering for teams in their geographic region (North America, Europe, South-East Asia, etc.) if
they make it to the last few rounds of an international tournament, they may also base their
fandom on a team logo or a particular player instead. Shoutcasters recognize this dynamic
and use it to cultivate the community.
Communication, they claim, separates them from traditional sports broadcasts or even
news anchors: "We communicate more with our audience than you'll see TV news anchors or
celebrities, but it's part of our job to get more information out there." The focus on
communication seems to be unique to shoutcasters as the majority of it happens outside of their
broadcasts. While many shoutcasters define their role on-screen as an educator of sorts, the
notion of spreading information about e-sports falls outside of their screen time. This double role
of broadcaster and community manager extends what media scholars have dubbed the
broadcasting persona beyond the point typically associated with sportscasters or news anchors.
Shoutcasters and Persona
Horton and Wohl (1956), two social scientists who study mass media, make the assertion
that mass media performers make a conscious decision to create and maintain parasocial
interactions through the creation of a persona. Social scientists have coined the term parasocial
interaction for the intangible connection which most of us feel to some form of media or another.
Standing in contrast to interpersonal interaction, a person to person exchange between two real
and cognizant human beings, parasocial interaction is instead a unidirectional relationship
(Miller and Steinberg 1970). The feeling of connection we create with fictional characters, news
anchors, or sports stars does not fall within the definition of an interpersonal interaction. Whether
mediated through a screen or the pages of a book, a parasocial interaction does not manifest in an
exchange of thoughts or words between individuals. Rather, it is embodied and lived through one
individual. Schiappa et al. (2007) conducted a meta-analysis of parasocial interaction literature to
better understand how broadcasters 'hook' viewers to a certain show. They concluded that
parasocial interactions can create and prolong connection to television programming. While
Schiappa et al. concede that there are a few opportunities for a parasocial interaction to result in
interpersonal relationships in the physical world, the compelling issue is the establishment of
intimacy mediated through means well outside of a person to person context.
Horton and Wohl set out with the goal of creating a term for the relationship between
performers and their audience in mass media. The authors suggest that the emergence of mass
media created an illusion of connection to performers which was previously unavailable. They
argue that the connection people feel to mass media stars is analogous to primary social
engagement. If this type of engagement takes place in radio and television, where users have no
opportunity to interact with audience members who are not co-present, it follows that the
interaction between broadcasters, their audience, and one another in a Twitch stream is a
particularly deep connection even beyond the level noticed by Horton and Wohl.
Shoutcasters create a familiar face and personality for audience members to connect with.
Mark Levy (1979), another proponent of parasocial interaction who focused his work on news
anchors, suggests that both news anchors and sportscasters help to create and maintain
communities through regular scheduling, conversational tones, and the creation of a broadcasting
persona. Shoutcasters perform this same role to even greater effect due to the constant changes
surrounding the e-sports industry. The regularity and consistency of shoutcasters' broadcasts
helps to foster a feeling of genuine connectedness within the community.
Although difficult to quantify, many conversations with shoutcasters turned to the odd
feeling of connection that e-sports fans feel towards one another. One shoutcaster attempted to
explain this connection by stating, "[w]henever I go to an event, I realize that fans are just
friends I haven't met yet." I found this statement to be particularly poignant. It hints to the sort of
intangible connection e-sports industry personalities and fans feel to one another through live-
streams. Anecdotally, this air of friendship permeated e-sports events that I have attended and
went well beyond what I have felt at traditional sporting events or concerts.
Previously, persona creation and maintenance occurred on-screen or at events only.
Social media has forced many media personalities to extend their personas beyond the long-held
notions of broadcaster-fan interaction. In many ways, shoutcasters must go beyond even these
extended boundaries into near-constant persona maintenance because of their roles in live-
streaming and community maintenance. Many shoutcasters give up their personal, off-air time to
stream their own gameplay or to create video content which necessarily prolongs the amount of
time they embody their broadcast persona.
I found that shoutcasters create a variation on the broadcast persona. Rather than a full-
blown broadcasting personality which they inhabit while on-air, most shoutcasters have found
that between community management, social media interactions, and broadcasts, they almost
never get an opportunity to step out of their role as a shoutcaster. Due to this near constant
connection, most shoutcasters acknowledge that they act differently on air, but they tend to
simply invoke a more upbeat and charismatic version of themselves. Echoed in each of the
interviews, the casters point to the idea of excitement, "you have to get excited for the person out
there watching." Even if they are not in the mood to shoutcast, or they have had a bad day,
shoutcasters must leave their personal issues out of the broadcast. This aspect of the
shoutcaster's personality comes out in all of their interactions on social media as well.
Most of the shoutcasters I interviewed situated their role in e-sports as somewhere
between Public Relations, Marketing, and Community Management. One of the casters
explained the importance of invoking the broadcast persona when speaking about sponsor
expectations: "We're working in an industry with companies behind us, we can't always say
exactly what we want to say." Shoutcasters' acknowledgement of their involvement in securing
sponsorships signals an interesting shift in the e-sports industry: the focus of the broadcast team
on potential revenue generation. I turn now to an analysis of the revenue streams found in both
traditional sports and e-sports broadcasting.
Chapter 3
Revenue
Funding Professional Play
After situating e-sports broadcasting within the greater sports media landscape,
particularly in conventions, casting, and use of medium, it is important to analyze the portions of
sports media production that have made their way into e-sports broadcasting. If we acknowledge
the influence that traditional sports broadcasting has had on e-sports broadcasting in the realms
of conventions and casting, we must also understand the importance of this relationship at the
production and economic levels. In this chapter I discuss how the history and development of the
sports media industrial complex in the U.S. has bled into the economics of the e-sports industry.
In particular, I focus on how sports media models inform the e-sports industry while portions of
the sports industry's revenue streams remain out of reach for e-sports broadcasters. Despite the
reshuffling of the sports media industrial complex mentioned in the introduction to this thesis,
traditional sports broadcasting still relies on the same revenue streams that it had in the past.
Traditional sports producers have fully capitalized on the commodification of their content. E-
sports producers, in contrast, are still shaping their revenue streams within live-streaming. The
commercialization found in the sports media industrial complex has taken hold of the e-sports
industry in several notable ways. Following in the example set by Stein's thesis work, it is not
enough to just acknowledge the relationship between e-sports and traditional sports media, we
must also understand the path which brought e-sports broadcasting to its current state.
Direct Certification
Direct certification is a process through which state agencies and school districts automatically
certify children for free meals based on documentation of the child’s status in a program or
category without the need for a household application.74 States are required to conduct direct
certification with SNAP and have the option of conducting direct certification with the other
programs and categories that convey categorical eligibility.
For SNAP and other federal programs, the direct certification process typically involves state
agencies (e.g., state SNAP and state educational agencies) cross-checking program rolls.75 A list
of matched children is sent to the school district, which certifies children for free meals without
the need for a household application.76 For foster, homeless, migrant, and runaway children,
direct certification typically involves school district communication with a local or state official
who can provide documentation of the child’s status in one of these categories.77
The 2004 child nutrition reauthorization act (P.L. 108-265) required states to conduct direct
certification with SNAP, with nationwide implementation taking effect in school year 2008-2009.
As of school year 2018-2019 (the most recent data available), USDA reported that 98% of
children in SNAP households were directly certified for free school meals.78
The HHFKA made further policy changes to expand direct certification. One of those changes
was the initiation of a demonstration project to test direct certification with Medicaid (see the text
box below). The law also funded performance incentive grants for high-performing states and
authorized corrective action plans for low-performing states in direct certification activities.79
Direct Certification with Medicaid Demonstration
The HHFKA initiated a demonstration project to conduct direct certification of children individually participating
in Medicaid and children in Medicaid households. Unlike the other programs used to directly certify children for
school meals, Medicaid does not convey categorical eligibility for free school meals, but rather identifies children in
households that would meet the income eligibility thresholds for either free or reduced-price school meals.80
Following the demonstration authority in the HHFKA as well as pilot authority in the Richard B. Russell National
School Lunch Act, some states are currently directly certifying children based on Medicaid data.81 As of school
year 2023-2024, there were 38 states operating direct certification with Medicaid. Two states used Medicaid to
directly certify children for free meals only (130% of the poverty level or below).82 Thirty-six states were
operating under an expanded direct certification demonstration project to test direct certification with Medicaid
for free and reduced-price meals (up to 185% of the poverty level).83
Verification of Eligibility
Each fall, districts are required to verify a sample of approved household applications on file,
with a focus on applications close to the eligibility threshold (“error-prone” applications).86
School districts may also conduct verification of questionable applications. Verification is not
required for children who are directly certified for free or reduced-price meals. (Note that districts
participating in Provisions 1, 2, and 3 must meet verification requirements for the years in which
they administer household applications.)
Many districts employ direct verification (matching data from other low-income programs) to
conduct their verification activities, but if data cannot be verified in this way, schools must
contact households to verify the information provided on the application. A child’s eligibility
status may stay the same or change (e.g., from free meals to reduced-price meals or loss of
eligibility) as a result of verification of household income, or if the household does not respond to
verification outreach (in which case eligibility would be lost, though that decision can be
appealed).
Reimbursement
School food authorities must keep track of the daily number of meals they serve in each category
(free, reduced-price, and paid) that meet federal nutrition requirements. School food authorities
then submit claims for reimbursement to the state agency, which submits the claims to FNS.
Approved reimbursements are distributed to school food authorities by the state agency, usually
on a monthly basis. Per statute, reimbursement rates are adjusted for inflation annually.87 Table 4
shows NSLP and SBP reimbursement rates for school year 2023-2024. (Note that school food
authorities also receive a per-lunch commodity reimbursement, discussed previously under
“Commodity Assistance”.)
The law provides a higher reimbursement rate for meals meeting certain criteria. For example,
school food authorities that are compliant with the updated federal nutrition standards for school
meals receive an additional 8 cents per lunch.88 School food authorities also receive an additional
2 cents per lunch if they serve 60% or more of their lunches at a free or reduced price. For
breakfasts, school food authorities receive higher reimbursements if they serve 40% or more
lunches at a free or reduced price (referred to as severe need schools).
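To illustrate how these per-lunch adjustments combine, the sketch below (not part of the report) applies the 8-cent and 2-cent add-ons described above; the base rate and function name are placeholders, since the actual school year 2023-2024 rates appear in Table 4, which is not reproduced here.
```python
# Illustrative sketch only: combines the per-lunch bonuses described above.
# BASE_RATE is a placeholder, not an actual Table 4 reimbursement rate.

BASE_RATE = 4.25  # hypothetical per-lunch base reimbursement, in dollars


def lunch_reimbursement(base_rate: float,
                        meets_updated_standards: bool,
                        share_free_or_reduced: float) -> float:
    """Per-lunch cash reimbursement under the add-on rules sketched above."""
    rate = base_rate
    if meets_updated_standards:
        rate += 0.08  # additional 8 cents for compliance with updated nutrition standards
    if share_free_or_reduced >= 0.60:
        rate += 0.02  # additional 2 cents if 60% or more lunches are free or reduced price
    return round(rate, 2)


# Example: a compliant school food authority serving 65% of lunches free/reduced price.
print(lunch_reimbursement(BASE_RATE, True, 0.65))  # 4.35 with the placeholder base rate
```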
Once school food authorities receive the cash reimbursements, they can use the funds to support
almost any aspect of the school food service operation. However, federal cash reimbursements
must go into a nonprofit school food service account that is subject to federal regulations.89
Payments for non-program foods (e.g., vending machine sales) must also accrue to the nonprofit
school food service account.90
FNS periodically studies the costs of producing a reimbursable meal. In April 2019, FNS released
a School Nutrition and Meal Cost Study, which found that the average reported cost of producing
a reimbursable lunch was $3.81 in school year 2014-2015 (reported costs were defined as those
charged to the school food service account).91 This exceeded the average federal cash
reimbursement ($3.32) for lunches in school year 2014-2015. When unreported costs were
included (costs outside of the food service account; for example, labor costs associated with
processing applications), the cost of producing the average reimbursable lunch was $6.02. As
noted previously, children’s payments and state and local funds may also cover meal costs. | #SYSTEM INSTRUCTIONS
Only use the information shared in the context to answer the questions.
Do not rely on external sources or your inherent knowledge to answer the question.
If a meaningful answer cannot be generated from the context, acknowledge that and ask for a related document or offer to help answer something else; do not hallucinate.
#CONTEXT
Direct Certification
Direct certification is a process through which state agencies and school districts automatically
certify children for free meals based on documentation of the child’s status in a program or
category without the need for a household application.74 States are required to conduct direct
certification with SNAP and have the option of conducting direct certification with the other
programs and categories that convey categorical eligibility.
For SNAP and other federal programs, the direct certification process typically involves state
agencies (e.g., state SNAP and state educational agencies) cross-checking program rolls.75 A list
of matched children is sent to the school district, which certifies children for free meals without
the need for a household application.76 For foster, homeless, migrant, and runaway children,
direct certification typically involves school district communication with a local or state official
who can provide documentation of the child’s status in one of these categories.77
The 2004 child nutrition reauthorization act (P.L. 108-265) required states to conduct direct
certification with SNAP, with nationwide implementation taking effect in school year 2008-2009.
As of school year 2018-2019 (the most recent data available), USDA reported that 98% of
children in SNAP households were directly certified for free school meals.78
The HHFKA made further policy changes to expand direct certification. One of those changes
was the initiation of a demonstration project to test direct certification with Medicaid (see the text
box below). The law also funded performance incentive grants for high-performing states and
authorized corrective action plans for low-performing states in direct certification activities.79
Direct Certification with Medicaid Demonstration
The HHFKA initiated a demonstration project to conduct direct certification of children individually participating
in Medicaid and children in Medicaid households. Unlike the other programs used to directly certify children for
school meals, Medicaid does not convey categorical eligibility for free school meals, but rather identifies children in
households that would meet the income eligibility thresholds for either free or reduced-price school meals.80
Following the demonstration authority in the HHFKA as well as pilot authority in the Richard B. Russell National
School Lunch Act, some states are currently directly certifying children based on Medicaid data.81 As of school
year 2023-2024, there were 38 states operating direct certification with Medicaid. Two states used Medicaid to
directly certify children for free meals only (130% of the poverty level or below).82 Thirty-six states were
operating under an expanded direct certification demonstration project to test direct certification with Medicaid
for free and reduced-price meals (up to 185% of the poverty level).83
Verification of Eligibility
Each fall, districts are required to verify a sample of approved household applications on file,
with a focus on applications close to the eligibility threshold (“error-prone” applications).86
School districts may also conduct verification of questionable applications. Verification is not
required for children who are directly certified for free or reduced-price meals. (Note that districts
participating in Provisions 1, 2, and 3 must meet verification requirements for the years in which
they administer household applications.)
Many districts employ direct verification (matching data from other low-income programs) to
conduct their verification activities, but if data cannot be verified in this way, schools must
contact households to verify the information provided on the application. A child’s eligibility
status may stay the same or change (e.g., from free meals to reduced-price meals or loss of
eligibility) as a result of verification of household income, or if the household does not respond to
verification outreach (in which case eligibility would be lost, though that decision can be
appealed).
Reimbursement
School food authorities must keep track of the daily number of meals they serve in each category
(free, reduced-price, and paid) that meet federal nutrition requirements. School food authorities
then submit claims for reimbursement to the state agency, which submits the claims to FNS.
Approved reimbursements are distributed to school food authorities by the state agency, usually
on a monthly basis. Per statute, reimbursement rates are adjusted for inflation annually.87 Table 4
shows NSLP and SBP reimbursement rates for school year 2023-2024. (Note that school food
authorities also receive a per-lunch commodity reimbursement, discussed previously under
“Commodity Assistance”.)
The law provides a higher reimbursement rate for meals meeting certain criteria. For example,
school food authorities that are compliant with the updated federal nutrition standards for school
meals receive an additional 8 cents per lunch.88 School food authorities also receive an additional
2 cents per lunch if they serve 60% or more of their lunches at a free or reduced price. For
breakfasts, school food authorities receive higher reimbursements if they serve 40% or more
lunches at a free or reduced price (referred to as severe need schools).
Once school food authorities receive the cash reimbursements, they can use the funds to support
almost any aspect of the school food service operation. However, federal cash reimbursements
must go into a nonprofit school food service account that is subject to federal regulations.89
Payments for non-program foods (e.g., vending machine sales) must also accrue to the nonprofit
school food service account.90
FNS periodically studies the costs of producing a reimbursable meal. In April 2019, FNS released
a School Nutrition and Meal Cost Study, which found that the average reported cost of producing
a reimbursable lunch was $3.81 in school year 2014-2015 (reported costs were defined as those
charged to the school food service account).91 This exceeded the average federal cash
reimbursement ($3.32) for lunches in school year 2014-2015. When unreported costs were
included (costs outside of the food service account; for example, labor costs associated with
processing applications), the cost of producing the average reimbursable lunch was $6.02. As
noted previously, children’s payments and state and local funds may also cover meal costs.
#QUESTION
Explain all the eligibility processes from the context in detail. |
Write the answer in one paragraph, using full sentences. Use only the document provided. Use language that is easy to understand. | What are the negative and positive aspects of virtual teaching for the instructors? | See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/377123871
More I-talk in student teachers’ written reflections indicates higher stress
during VR teaching
Article in Computers & Education · April 2024
DOI: 10.1016/j.compedu.2024.104987
Computers & Education 212 (2024) 104987
Available online 3 January 2024
0360-1315/© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC license
(http://creativecommons.org/licenses/by-nc/4.0/).
More I-talk in student teachers’ written reflections indicates
higher stress during VR teaching
Andrea Westphal a,*, Eric Richter b, Rebecca Lazarides b, Yizhen Huang b
a University of Greifswald, Department of Education, Steinbeckerstr. 15, 17487, Greifswald, Germany b University of Potsdam, Department of Education, Karl-Liebknecht-Straße 24-25, 14476, Potsdam, Germany
ARTICLE INFO
Keywords:
Augmented and virtual reality
Improving classroom teaching
Teacher professional development
ABSTRACT
Video-based reflection on one’s own teaching represents a crucial tool in teacher education. When
student teachers reflect on negative classroom events, it elicits “self-focused attention,” which has
been associated with more intense negative emotionality. Self-focused attention can be quantitatively captured using first-person singular pronouns (“I,” “me,” “my”) in written reflections by,
for instance, student teachers. What is unclear is whether student teachers’ use of these firstperson singular pronouns in their written reflections is linked to and predicts their negative affective experiences during teaching. For the present study, a fully immersive virtual reality (VR)
classroom was implemented in which student teachers taught a lesson, provided written reflections on their teaching, and then taught a second lesson. We measured N = 59 student
teachers’ self-reported stress and heartrate responses while teaching in the VR classroom and
determined the percentage of first-person singular pronouns in their written reflections. Firstly,
our results showed that the use of first-person singular pronouns provides incremental information on manual ratings of student teachers’ foci in their written reflections. Secondly, student
teachers’ heartrates during instruction—a measure of physiological stress—were associated with
the use of first-person singular pronouns in subsequent written reflections. Thirdly, the use of
first-person singular pronouns predicted the increase in physiological stress from the first to the
second round of VR teaching. We discuss implications for automated feedback and for designing
reflective tasks.
* Corresponding author.
E-mail addresses: [email protected] (A. Westphal), [email protected] (E. Richter), [email protected] (R. Lazarides), [email protected] (Y. Huang).
1 In our study, the term “student teachers” refers to those prospective teachers who have not yet begun the supervised teaching portion of their teacher education program.
https://doi.org/10.1016/j.compedu.2024.104987
Received 21 November 2022; Received in revised form 11 November 2023; Accepted 2 January 2024
1. Introduction
Teaching is often seen as a challenging profession (Chang, 2009; Westphal et al., 2022). The transition to school practice, at least when it takes place in real school classrooms, is especially demanding for student teachers1 (Goddard et al., 2006; Hultell et al., 2013; Voss & Kunter, 2020). Fully immersive virtual reality (VR) classrooms provide a safe environment for student teachers to gain hands-on teaching experience (Gold & Windscheid, 2020; Lin, 2023; Lugrin et al., 2016; Pendergast et al., 2022; Remacle et al., 2023; Richter et al., 2022; Seufert et al., 2022). Ke and Xu (2020) suggested that active learning processes (“diving in”) and reflective
learning processes (“stepping out”) can be easily combined in VR classrooms. Thus, it may be useful to practice reflecting on classroom
situations from a more distanced perspective using VR classrooms early on in teacher education to help student teachers make a
smooth transition into the classroom. Because the setting is standardized and similar for all student teachers, VR classrooms allow us to
more accurately research the contributing factors and consequences of student teachers’ self-reflection; the highly controllable setting
increases the internal validity of research findings (Huang et al., 2021; Richter et al., 2022). This is of particular interest for research
looking at the interrelationships between student teachers’ affects in relation to their classroom experiences and their reflections upon
those experiences. VR classrooms allow us, for instance, to apply a finding from experimental psychology to the field of teacher education. Research in the fields of experimental psychology and clinical psychology has shown that self-focus after negative events is
accompanied by greater negative affect (e.g., Mor & Winquist, 2002). This finding has been explained by dysfunctional
emotion-regulation strategies (e.g., Nook et al., 2017). Applied to the context of teacher education, these findings may imply that
student teachers who are less skilled at transitioning between a more immersed perspective and a more distanced perspective on
challenging classroom events may experience more negative affect. The degree of a person’s self-focus can be determined based on the
frequency with which they use first-person singular pronouns (e.g., “I,” “me,” “my”); this has also been called “I-talk” (Tackman et al.,
2019). It would be highly relevant to examine this link between self-focused attention in written reflections—as reflected by more
frequent I-talk—and negative affect during instruction in the field of teacher education, because knowledge about the link between the
use of first-person singular pronouns and negative affect could be implemented in automated feedback systems to help identify student
teachers at risk of intense stress and burnout, allowing them to be offered personalized feedback and support. To date, however, these
questions have not been explored.
To examine whether student teachers’ use of first-person singular pronouns is an indicator of the stress level they experience in the
classroom, and whether it can predict changes in a student teachers’ stress, a highly standardized teaching situation is preferable,
because it ensures that all student teachers under study are reflecting on the same learning environment and on similar classroom
events at all measurement occasions. A VR classroom creates just such a setting. In the present study, class size in the VR classroom was
manipulated to induce higher levels of stress (large-class-size condition with a higher number of student avatars) and lower levels of
stress (small-class-size condition with a lower number of student avatars). Initially, we checked whether the use of first-person pronouns in written reflections provides incremental information about subsequent manual ratings of student-teachers’ foci (whether on
their own actions vs. student actions vs. the learning environment) in these reflections. In addition, we examine whether student
teachers who experience more self-reported stress and physiological stress (as measured by heartrate response) while teaching in a VR
classroom use more first-person singular pronouns in written reflections on their teaching. We also explore whether student teachers’
use of first-person singular pronouns in written reflections predicts the increase in stress in a subsequent VR lesson.
1.1. Reflection in teacher education: Technological advances
“Reflection” has been defined as “deliberate, purposeful, metacognitive thinking and/or action” (Košir et al., 2015, p. 113) that is
believed to enhance instructional quality (Chernikova et al., 2020) by improving student teachers’ “noticing” and “knowledge-based
reasoning” (Stürmer et al., 2013; van Es & Sherin, 2008). Noticing is understood as a teachers’ ability to focus their attention on
relevant classroom events (van Es & Sherin, 2002). Knowledge-based reasoning is defined as teachers’ ability to apply their professional knowledge in order to interpret these classroom events (Borko, 2004; van Es & Sherin, 2002). In teacher education programs,
reflection is implemented in different ways, with student teachers reflecting on classroom videos of other teachers or on video recordings of their own teaching (Kleinknecht & Schneider, 2013). Technological advances have made it possible to design VR classrooms that student teachers experience as realistic and authentic classroom settings (Huang et al., 2021; Wiepke et al., 2019, allowing
them to practice and subsequently reflect on video recordings of their own teaching. Beyond the practical advantages for student
teacher education (e.g., the approval of students, parents, educators, and administrators is not required), an important benefit of VR
classrooms is the standardized classroom setting in which all student teachers experience similar classroom events for which they can
be prepared in advance. This enables teacher educators to more easily provide relevant professional knowledge tailored to managing
critical classroom events in the VR classroom; when reflecting on video recordings of their own teaching in the VR setting, student
teachers may thus be more able to apply relevant professional knowledge.
Video-based reflection generally involves written or oral reflections after viewing classroom videos; it is recommended that these
reflections follow a three-step process (e.g., Prilop et al., 2021). Firstly, student teachers are instructed to describe the relevant
classroom events; secondly, they are asked to evaluate and interpret these events; and, thirdly, they are required to identify alternative
classroom behaviors (e.g., Prilop et al., 2021). When describing relevant classroom events, student teachers may focus either on their
own actions, students’ actions, or on the learning environment as a whole (Kleinknecht & Schneider, 2013; Lohse-Bossenz et al., 2019).
Previous research has been resoundingly positive on the benefits of video-based reflection for student teachers’ professional vision (e.
g., Stürmer et al., 2013; Weber et al., 2020). Very few studies shed light on the affective experiences associated with video-based
reflection (Kleinknecht, 2021). Although Kleinknecht and Schneider (2013) suggested that reflecting on other teachers’ videos can
induce more negative affect than reflecting on one’s own teaching videos, several studies have shown that reflecting on one’s own
videos elicits intense emotional involvement (Borko et al., 2008; Seidel et al., 2011; Zhang et al., 2011). What is unclear, however, is to
what extent student teachers’ attentional focus (i.e., focusing on own thoughts, actions, or emotions) in the video-based reflection of
their own videos relates to their experiences of negative affect or stress.
1.2. “I-talk” as a linguistic marker of self-focused attention and negative affect
In the fields of experimental psychology and clinical psychology, there is increasing empirical evidence indicating that there is a
relationship between self-focused attention and greater negative affect (e.g., Mor & Winquist, 2002; Nook et al., 2017). For instance,
studies using correlational designs showed that self-report measures of self-focus—such as the Public and Private Self-Consciousness
Scale (Fenigstein et al., 1975), but also sentence completion tasks (Exner, 1973; Wegner & Giuliano, 1980)—are associated with
self-report measures of state or trait negative affect in non-clinical and clinical samples (for an overview, see the meta-analysis by Mor
& Winquist, 2002). Other studies showed that self-referential language (so-called “I-talk”)—as a linguistic marker of an individual’s
self-focus—is associated with more intense negative emotionality in non-clinical samples (Kern et al., 2014; Mehl et al., 2006; Yarkoni,
2010; Yee et al., 2011; see also meta-analysis by Edwards & Holtzman, 2017) and is a marker of depression (Dunnack & Park, 2009;
Rude et al., 2004; Zimmermann et al., 2016). An extensive multi-lab multi-language study with data from more than 4700 participants
recently confirmed that the use of first-person singular pronouns is linked to more intense negative emotionality and depression
(Tackman et al., 2019). The association between I-talk and depression has been explained to some extent by negative emotionality
(Tackman et al., 2019). This indicates that “I-talk” might reflect a broader dispositional tendency towards feelings of distress, but “[t]
his possibility is a topic of ongoing research” (Berry-Blunt et al., 2021, p. 5).
Although frequent I-talk is seen as maladaptive, it may be a way of processing negative affect (e.g., after receiving deprecatory
information about oneself), as a recent literature review concluded (Berry-Blunt et al., 2021). This would suggest that greater negative
affect provokes I-talk. Experimental research did not find any evidence that inducing negative affect by showing participants negative
pictures led to more I-talk (e.g., Bernard, Baddeley, Rodriguez, & Burke, 2016). In line with Bernard, Baddeley, Rodriguez, and Burke
(2016) suggestion, it may be the case that negative affect only leads to more I-talk when elicited by self-deprecating information. This
proposition is consistent with research on the negative affective experiences of teasing and ostracism, which do lead to more frequent
I-talk (Klauke et al., 2020; Kowalski, 2000).
The reverse may also be true, however. Taking a more distanced perspective—as indicated by less I-talk—may over time change an
individual’s dispositional negative emotionality (Berry-Blunt et al., 2021). Distancing, i.e., shifting one’s perspective to be more
“distant” from or less immersed in a negative event, is an adaptive emotion-regulation strategy that is characterized by a lower level of
self-focused attention and can help reduce negative affect (Kross & Ayduk, 2008). The suggestion that repeated distancing may reduce
an individual’s tendency to experience negative affect (Berry-Blunt et al., 2021) would explain why I-talk predicts future depressive
symptoms in patients (Dunnack & Park, 2009; Zimmermann et al., 2016). Even in the short term, less I-talk may lead to less negative
affect: Experimental research confirmed that distancing—i.e., using no I-talk when talking about one’s own emotions while preparing a
stressful speech—can lessen negative affect after having given the speech (Kross et al., 2014, Study 3). Thus, the causal direction of the
link between I-talk and negative emotionality is less clear, but there may be a bidirectional relationship.
1.3. Gender differences in the association between self-focus and negative affect
Based on meta-analytical findings that women exhibit a greater tendency to ruminatively self-focus than men when experiencing
depression (Johnson & Whisman, 2013), it has been suggested that the positive association between negative emotionality and the use
of first-person singular pronouns is larger for women than it is for men (e.g., Tackman et al., 2019). When examining these gender
differences, Tackman et al. (2019) underlined that the use of first-person singular pronouns seems to be driven by low-arousal negative
distress in women. In contrast, it appears that men’s use of first-person singular pronouns is driven by high-arousal negative distress
(Tackman et al., 2019). Thus, the use of first-person singular pronouns may be an indicator of different affective experiences in women
and men (Fast & Funder, 2010) and, thus, gender should be taken into account when studying the relationship between self-focus and
negative affect.
1.4. Distinguishing different forms of self-focus in written reflections
Different strands of research differ in their operationalization of self-focus. In research on reflection in teacher training, manual
ratings indicate the extent to which student teachers focus on their own actions and thoughts, on students’ actions, or on the learning
environment (e.g., Kleinknecht & Schneider, 2013). In experimental psychology, self-focus is often operationalized via the use of
first-person singular pronouns (Berry-Blunt et al., 2021; Mor & Winquist, 2002). This strand of research has argued that individuals can
take a more or less immersed or distanced perspective on their own actions and thoughts by using more or fewer first-person singular
pronouns (Kross & Ayduk, 2008; Kross et al., 2014). When reflecting on their own teaching, some student teachers may distance
themselves from the experience and therefore rarely use first-person singular pronouns, while others may immerse themselves in the
experience and use first-person singular pronouns more frequently. As such, both indicators of self-focus (qualitative ratings vs.
first-person singular pronouns) should provide incremental information about student teachers’ self-focus in their written reflections.
For student teachers taking a more immersive perspective on their own teaching, objective first-person singular pronouns (i.e.,
“me,” “myself”) may reflect a more dysfunctional form of self-focus than subjective first-person singular pronouns (i.e., “I”) (Zimmermann et al., 2016). While subjective pronouns reflect an “active or self-as-actor form of self-focus,” objective pronouns reflect a
“passive or self-as-target form of self-focus” (James, 1890; Tackman et al., 2019, p. 819) that may indicate an even more detrimental
style of processing self-relevant information (Zimmermann et al., 2016; see also more and less dysfunctional questions in Ehring,
2020). This poses the question of whether the relationship between the use of first-person pronouns and negative affect is driven
mainly by objective first-person pronouns (Zimmermann et al., 2016). As such, “it is important to evaluate whether and how the
association between depression and I-talk varies as a function of first-person singular pronoun type” (Tackman et al., 2019, p. 819).
Previous evidence on whether subjective and objective first-person pronouns differentially relate to negative affect is mixed (there is
support in the study by Zimmermann et al., 2016; but no or inconsistent differences in the studies by Dunnack & Park, 2009; Tackman
et al., 2019). But the distinction between subjective and objective first-person singular pronouns appears to be essential in our study, in
which the relationship between the use of first-person pronouns and negative affect is examined for the first time in the context of
teacher training.
1.5. Present study
Negative emotionality may be critical for student teachers when they reflect on their own teaching, yet research has rarely
concentrated on the role of negative emotionality for reflective processes (Kleinknecht, 2021). Meanwhile, research in the field of
experimental psychology indicates that negative affect may provoke self-focused attention as indicated by the use of first-person
singular pronouns (Berry-Blunt et al., 2021). Moreover, self-focused attention may also increase the tendency towards experiencing
negative affect (Berry-Blunt et al., 2021). What we do not know is whether this applies to the context of teacher education where
student teachers reflect on their own teaching. Studying this link between self-focus and negative affect in the context of student
teachers’ written reflections can provide valuable cues for diagnostic tools, automated feedback systems, and the improvement of
student teachers’ professional self-regulation. We used a VR classroom setting, and thus a highly standardized teaching situation, to
ensure that student teachers were reflecting on similar classroom events as we examined the following research questions:
(1) Do student teachers who exhibit a greater focus on their own actions (instead of students’ actions or the classroom environment)
use more subjective and objective first-person singular pronouns in written reflections on their teaching?
Here, the assumption could be that those student teachers who use subjective and objective first-person singular pronouns more
frequently also tend to focus on themselves rather than on the students in class or on the classroom environment. However, there is no
previous research combining manual ratings of student teachers’ foci (whether on their own actions vs. student actions vs. the learning
environment) with student teachers’ subjective and objective use of first-person singular pronouns when reflecting on their own
teaching. Thus, it is unclear to what extent manual ratings of student teachers’ foci correspond to the subjective and objective use of
first-person pronouns in written reflections on their teaching. We seek to address this research gap as an exploratory question.
(2) Do student teachers who experience more self-reported stress and physiological stress (as measured by heartrate response)
while teaching in a VR classroom use more subjective and objective first-person singular pronouns in written reflections of their
teaching?
Building on empirical evidence showing that negative affect may provoke self-focused attention, we analyze whether student
teachers who experience higher levels of negative affect when teaching in the VR classroom more frequently use subjective and
objective first-person singular pronouns in their written reflections. The class size in the VR classroom, i.e., the number of student
avatars, was manipulated to evoke higher levels (large-class-size condition) and lower levels of negative affect (small-class-size
condition) (Huang et al., 2022). We hypothesize that student teachers who experience more negative affect—operationalized via
self-reported and physiological stress—when teaching in the VR classroom for the first time will use more subjective and objective
first-person singular pronouns in their written reflections.
(3) Does student teachers’ use of subjective and objective first-person singular pronouns in written reflections of their teaching
predict their increase in stress in a subsequent VR lesson?
We postulate that student teachers who use more subjective and objective first-person singular pronouns in their written reflections
will experience a greater increase in self-reported and physiological stress in the second VR classroom (as compared to the stress levels
experienced when teaching in the VR classroom for the first time).
2. Material and methods
2.1. Sample and procedure
Participants were N = 65 student teachers enrolled at the University of (anonymized for review) in Germany. Four of the
participating students did not hand in their written reflection and two additional students did not participate in the second VR practice
session, limiting our analyses to n = 59 students. Student teachers were on average 24 years old (SD = 4.57) and 49% identified as
female, 51% as male, none as diverse. Most students were third-year bachelor students (58%; second-year: 27%; fourth-year: 14%).
These student teachers participated in a weekly seminar on classroom management. The seminar included two 10-min practice sessions in a VR classroom that took place two weeks apart. Both VR practice sessions followed a standardized procedure. Participants
were first given a brief standardized audio introduction on how to interact with the VR environment. Participants were then given a
brief lecture about an a priori determined topic in the VR classroom to deliver to avatar students. During the first VR teaching
experience, participants taught about the US electoral system and the 2020 US election. For the second VR teaching experience,
participants taught about sustainability. All the instructional materials that participants needed to accomplish the teaching task were
prepared and provided by the course instructor one week before the teaching exercise. During their VR teaching session, participants
were exposed to various on-task and off-task behaviors from the avatar students, such as asking topic-related questions, chatting, or
throwing paper balls. All avatar student actions were prescribed and therefore the same for all participants in both the class with 10
student avatars and the class with 30 student avatars. Immediately after their teaching experience in the VR classroom, the student
teachers reported on their stress levels in an online questionnaire. In addition, the student teachers handed in a written reflection on
their VR teaching session in the week following the first VR practice session.2 Prior to writing their reflections, the student teachers
received guidance on the three-step reflection process. In this process, they were instructed to describe three relevant classroom
situations (Step 1), evaluate and interpret these situations based on their professional knowledge (Step 2), and outline alternatives for
classroom situations that they evaluated negatively (Step 3). Before writing their reflections, student teachers were given time to
repeatedly watch the video of the VR classroom situation in which they had taught.
2.2. Design of the VR classroom
The VR classroom was designed to resemble an upper secondary school classroom in Germany (e.g., Wiepke et al., 2021). It was
set up with five rows and three columns of school desks and chairs with avatar students (see Appendix). Avatar students’ names were
displayed on name tags placed on their desks and had a wide range of physical characteristics, such as skin tone, hairstyle, and
clothing.
Student teachers were randomly assigned to teach a class of either 10 or 30 student avatars when teaching the VR classroom for the
first time and again when teaching it for the second time. The avatar students engaged in a range of behaviors that included both ontask and off-task actions. These actions ranged from constructive activities, such as writing in a notebook, to less productive actions,
such as throwing a paper ball. The avatar students maintained a natural seated posture and a neutral demeanour, occasionally
redirecting their attention by shifting their gaze or adjusting their body orientation in response to the participants’ movements. The
selection of off-task behaviors was based on a compilation of common disruptive behaviors documented in the academic literature
(Borko, 2016; Wolff et al., 2016). All parameters governing the avatar students’ behaviors, including initiation time, duration, spatial
location, and behavior type, were carefully scripted to maintain uniformity across experimental conditions. This ensured that the
behaviors enacted by the avatar students remained consistent between the scenarios with 10 avatar students and those with 30 avatar
students in the classroom.
We used the HTC VIVE headset, which has a resolution of 1080 x 1200 pixels per eye with a 108° field of view and a refresh
rate of 90 Hz. The headset was connected to a laptop (Alienware) with a 2.2-GHz Intel Core i7-8750H processor, with 16 GB of RAM,
and a NVIDIA GeForce RTX 2060 with 6 GB of VRAM graphic card, where the VR classroom software was operated. Essentially,
participants could move around in reality while experiencing multisensory feedback in the VR classroom. Previous studies have
confirmed that the technical setup in our VR classroom created an immersive soundscape which student teachers experienced as
realistic and authentic (Wiepke et al., 2019, 2021).
Student teachers were equally distributed across the four conditions (small class size VR1, small class size VR2: 21%; small class size
VR1, large class size VR2: 23%; large class size VR1, small class size VR2: 25%; large class size VR1, large class size VR2: 28%).3 Thus,
some teachers taught under the same conditions twice, while other teachers taught one VR session in the small class size condition and
the other VR session in the large class size condition. Student teachers teaching in the VR classroom were instructed to teach their
lesson as they would in a real classroom i.e., walking around the room and using similar observational and nonverbal behavior when
interacting with the avatar students as they would with non-virtual students.
2.3. Measures
2.3.1. Self-reported stress in the VR classroom
We measured the stress that student teachers experienced during the VR scenario using two items (“How tense did you feel in the
VR classroom?” and “How did you feel emotionally during the VR classroom?”; e.g., Delaney & Brodie, 2000). Items were answered on
a 9-point Likert-type scale ranging from 1 (calm, relaxed, composed) to 9 (tense). The internal consistency was good (αT1 = 0.83; αT2 =
0.85).
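As a small illustration of how such a two-item scale can be scored, the sketch below (not from the study) computes per-participant means and Cronbach's alpha for invented responses on the 1-9 scale.
```python
import numpy as np

# Invented responses to the two stress items (1 = calm ... 9 = tense);
# rows are participants, columns are the two items.
items = np.array([[6, 7],
                  [3, 4],
                  [8, 8],
                  [5, 6]])

# Scale score: each participant's mean across the two items.
stress_scores = items.mean(axis=1)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the item sum).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

print(stress_scores, round(alpha, 2))
```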
2.3.2. Physiological indicator of stress in the VR classroom
We operationalized student teachers’ physiological stress reactions based on their heartrate (beats per minute, BPM). Student
teachers’ BPM was measured using an armband optical HR sensor (Polar OH1) at 0.3s intervals when teaching in the VR classroom.
Prior to starting the VR scenarios, each student teachers’ baseline heartrate was measured at 0.3s intervals while student teachers were
asked to sit quietly and stay still. These baseline measures were used to control for individual differences in cardiovascular activity. We
then aggregated both heartrate measurements during the baseline phase and heartrate measurements during the VR teaching. The
differences between student teachers’ heartrate during teaching and student teachers’ baseline heartrate was used as a physiological
indicator of student teachers’ stress response.
2 Students also reflected on the second VR classroom three months afterwards based on a video recording. Given the time span between this VR classroom and the written reflection, we didn’t include these written reflections in our analyses.
3 Percentages do not total 100 because of rounding.
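A minimal sketch of this baseline correction, using invented BPM samples; the function and variable names are illustrative, not taken from the study.
```python
import numpy as np


def physiological_stress(bpm_teaching, bpm_baseline):
    """Difference between mean heartrate while teaching and mean baseline heartrate."""
    return float(np.nanmean(bpm_teaching) - np.nanmean(bpm_baseline))


# Invented heartrate samples, nominally recorded at 0.3-second intervals.
baseline = np.array([72.0, 73.5, 71.8, 72.4])
teaching = np.array([95.2, 101.4, 98.7, 99.9])

print(physiological_stress(teaching, baseline))  # positive values indicate elevated heartrate
```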
2.3.3. Self-referential language
We used the R package stringr (Wickham, 2019) to estimate the frequency of subjective and objective first-person singular pronouns. For each written reflection, we computed the percentage of subjective first-person singular pronouns (German: “ich”; English:
“I”) and the percentage of objective first-person singular pronouns (German: “mich,” “mir”; English: “me,” “myself”).4
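The authors report using the R package stringr for this step; the sketch below is an illustrative Python equivalent rather than their code, and the example sentence is invented.
```python
import re


def pronoun_percentages(text: str) -> dict:
    """Percentage of subjective ("ich") and objective ("mich", "mir") first-person
    singular pronouns relative to all words in one written reflection."""
    words = re.findall(r"\w+", text.lower())
    n_words = len(words)
    subjective = sum(w == "ich" for w in words)
    objective = sum(w in ("mich", "mir") for w in words)
    return {
        "subjective_pct": 100 * subjective / n_words if n_words else 0.0,
        "objective_pct": 100 * objective / n_words if n_words else 0.0,
    }


print(pronoun_percentages("Ich dachte, die Störung hat mich völlig aus dem Konzept gebracht."))
```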
2.3.4. Manual ratings of focus and depth in written reflections
Two independent raters manually rated the focus (as either student teachers’ own actions, student avatars’ actions, learning
environment, or no focus) and depth (description, evaluating/explaining, or reflecting on alternatives) in student teachers’ written
reflections using the software MAXQDA and following the procedure of Lohse-Bossenz et al. (2019; Kleinknecht & Gröschner, 2016).
Interrater agreement was good (κ = 0.75). For our analyses, we used the coverage percentage, i.e., the number of characters in the
coded segment in relation to the total number of characters in the text.
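As an illustration of the coverage measure, the following sketch assumes the coded segments have already been identified (in the study this was done in MAXQDA); the example reflection and segment are invented.
```python
def coverage_percentage(coded_segments, full_text):
    """Share of the reflection's characters that fall inside coded segments."""
    coded_chars = sum(len(segment) for segment in coded_segments)
    return 100 * coded_chars / len(full_text)


reflection = "Ich habe die Störung zu spät bemerkt. Die Klasse war danach unruhig."
own_action_segments = ["Ich habe die Störung zu spät bemerkt."]
print(round(coverage_percentage(own_action_segments, reflection), 1))
```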
2.4. Statistical analyses
To test the hypotheses, we conducted regression analyses using the R package MplusAutomation (Hallquist & Wiley, 2018) and
MLR as the estimation method. A manipulation check on our data was conducted by Huang et al. (2022) who probed whether student
teachers experienced more self-reported and physiological stress when teaching a VR classroom in the large-size condition than in the
small-size condition. We initially examined whether student teachers who used more subjective and objective first-person singular
pronouns focused more frequently on themselves rather than on the avatar students in class or on the classroom environment (RQ1). To
do so, we distinguished between student teachers who used subjective first-person singular pronouns with low frequency (i.e., ≤0.5 SD
below M), average frequency (≥0.5 SD below M and ≤0.5 SD above M), and high frequency (≥0.5 SD above M). A multivariate analysis
of variance was conducted with the subjective use of first-person singular pronouns (low, average, or high) as the between-person
factor and the relative frequency of the student teachers’ reflection focus (the student teacher, student avatars, or learning environment) as the dependent variable. The analysis was then repeated for the objective use of first-person singular pronouns. To examine our
second research question, we then regressed subjective and objective first-person singular pronoun use in the written reflections of the
first VR classroom on self-reported stress in these VR sessions (RQ2). We controlled for gender.5 Within this model, we allowed the use
of subjective and objective first-person singular pronouns to correlate. We specified an analogous model with physiological stress as
the predictor. In a second step, we regressed self-reported stress during the second VR classroom teaching session on the subjective and
objective use of first-person singular pronouns while controlling for self-reported stress during the first VR classroom teaching session
(RQ3). Gender and class size in the second VR classroom teaching session were used as covariates. Thus, regression coefficients of
pronoun use and gender indicate whether these variables explain changes in self-reported stress in the second, as compared to the first,
VR classroom teaching session. Analyses were conducted separately for the use of subjective and objective first-person singular
pronouns to probe whether subjective and objective first-person pronouns differentially relate to negative affect (instead of examining
incremental effects). Analogous models were computed for physiological stress. Due to technical difficulties when transferring the
heartrate data onto the storage device, heartrate while teaching in the VR classroom was only available for 40 students. There were
only 29 students for whom heartrate responses were present in both VR classrooms. We examined whether values were missing
completely at random using the MCAR-Test (Little, 1988) implemented in the R package naniar (Tierney et al., 2021). The test yielded
non-significant results (χ2 = 17.5, df = 25, p = 0.864) indicating that the values were missing completely at random. We applied the
full-information maximum-likelihood approach (FIML; Enders, 2001) to obtain appropriate estimates and standard errors.
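The sketch below illustrates two of the steps described in this section, the ±0.5 SD grouping of pronoun use and a simple regression of pronoun use on stress with gender as a covariate, using pandas and statsmodels on simulated data; it is a simplified stand-in, not the authors' MplusAutomation workflow, and all column names are assumed.
```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with assumed column names; the real study data are not available here.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "subj_pronouns": rng.normal(3.6, 2.2, 59).clip(0),  # % subjective pronouns
    "phys_stress": rng.normal(60, 22, 59),              # heartrate difference (BPM)
    "gender": rng.integers(0, 2, 59),                    # 0 = female
})

# Low / average / high pronoun use relative to +/- 0.5 SD around the mean.
z = (df["subj_pronouns"] - df["subj_pronouns"].mean()) / df["subj_pronouns"].std()
df["pronoun_group"] = pd.cut(z, bins=[-np.inf, -0.5, 0.5, np.inf],
                             labels=["low", "average", "high"])

# Regression of pronoun use on physiological stress, controlling for gender
# (a simplified ordinary-least-squares stand-in for the reported models).
model = smf.ols("subj_pronouns ~ phys_stress + gender", data=df).fit()
print(df["pronoun_group"].value_counts())
print(model.params)
```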
3. Results
A manipulation check on our data (Huang et al., 2022) showed that student teachers experienced more self-reported and physiological stress when teaching a VR class in the large-size condition than in the small-size condition. Descriptive results (Table 1) and a
dependent t-test showed that self-reported stress was greater in the first VR session than in the second VR session, t(54) = 4.89, p <
0.001. In contrast, physiological stress did not differ statistically significantly between the two VR sessions, t(28) = 0.68, p = 0.503.
3.1. Subjective and objective first-person singular pronouns and manual ratings of focus in written reflections
Pertaining to our first research question, we checked whether student teachers with a more frequent use of subjective and objective
first-person singular pronouns focused more often on themselves rather than on the avatar students in class or on the classroom
environment (Fig. 1a and b). A multivariate analysis of variance with the subjective use of first-person singular pronouns (low,
average, or high) as a between-person factor and relative frequency of student teachers’ focus of reflection on their own actions (as
opposed to focus on student avatars or the learning environment) as a dependent variable showed no statistically significant effect, F(6, 108) = 0.59, p = 0.74. Similarly, we found no statistically significant difference between students with low, average, or high frequency of the objective use of first-person singular pronouns on the relative frequency of student teachers’ reflection being focused on their own actions, F(6, 108) = 1.18, p = 0.32. Thus, high subjective or objective first-person singular pronoun use was not reflected in a greater focus on student teachers’ own actions in written reflections (as indicated by manual ratings).
4 The objective first-person singular pronoun “meiner” is rarely used and was not considered here—following the suggestion by Tackman et al. (2019)—because it overlaps with the more frequently used possessive pronoun “meiner”.
5 We did not look for interaction effects between the use of first-person singular pronouns and gender as our sample was small.
3.2. Do student teachers who experience more stress in the VR session use more subjective and objective first-person singular pronouns in
their written reflections?
To answer our second research question regarding the relationship between student teachers’ stress in the VR session and the use of
subjective and objective first-person singular pronouns, we conducted cross-sectional regression analyses. Our results showed that selfreported stress experienced during the first VR teaching session was not statistically significantly associated with the use of subjective
and objective first-person singular pronouns (Table 2). Gender showed a statistically significant association with subjective and
objective pronoun use, indicating that student teachers who identified as male used subjective and objective first-person singular
pronouns more frequently than student teachers who identified as female. In contrast, physiological stress experienced during the first
VR teaching session was statistically and positively associated with both the use of subjective and the use of objective first-person
singular pronouns. Thus, student teachers who experienced greater physiological stress when teaching in the VR classroom used
more subjective or objective first-person singular pronouns in their written reflections. Beyond physiological stress, associations between gender and the use of subjective or objective first-person pronouns were not statistically significant.
3.3. Do student teachers who use more subjective and objective first-person singular pronouns in their written reflections experience a
greater increase in stress in the second VR session?
Concerning our third research question, we examined whether student teachers who used more subjective and objective firstperson singular pronouns in their written reflection experienced a greater increase in stress in the second VR session. When
regressing the self-reported stress that participants experienced during the second VR teaching session, the use of subjective firstperson singular pronouns was not a statistically significant predictor when controlling for self-reported stress in the first VR teaching session, gender and class size during the second VR teaching session (Table 3, Model 3a). The use of objective first-person singular
pronouns emerged as a statistically significant predictor of self-reported stress during the second VR teaching session (Table 3, Model
3b). This was not in line with our cross-sectional findings and we therefore conducted additional analyses, which revealed that the
association between the use of objective first-person singular pronouns and self-reported stress was explained by the level of physiological stress in the first VR teaching session (Table A2 in Appendix). Concerning the covariates, gender and class size were not
statistically significant predictors of self-reported stress in both models (Models 3a-b, Table 3). Associations between self-reported
stress experienced in the first and in the second VR teaching session were statistically significant, indicating that student teachers
who experienced more self-reported stress in the first VR teaching session reported more self-reported stress in the second VR teaching
session.
Associations between physiological stress during the second VR teaching session and the use of subjective first-person singular
pronouns were statistically significant when controlling for physiological stress experienced during the first VR teaching session and
class size in the second VR session (Table 3, Model 4a). Thus, student teachers who used more subjective first-person singular pronouns
in their written reflections of the first VR teaching session experienced a stronger increase in physiological stress during the second VR
teaching session. We found no association between physiological stress during the second VR teaching session and the use of objective
first-person singular pronouns (Table 3, Model 4b). Regarding the covariates, gender was not a statistically significant predictor of
physiological stress in either model, while a greater class size for the second VR classroom was associated with greater physiological
stress (Models 4a-b, Table 3). Student teachers who experienced greater physiological stress in the first VR teaching session exhibited
more intense physiological stress in the second VR teaching session.
4. Discussion
Negative affect can play a crucial role in student teachers’ reflection, but it has barely been studied (Kleinknecht, 2021). The
Table 1
Descriptive statistics.
N M SD Min Max
Psychological stress VR1 58 5.22 1.70 2.00 9.00
Psychological stress VR2 55 3.97 1.78 1.50 9.00
Physiological stress VR1 40 60.36 22.25 21.56 108.38
Physiological stress VR2 46 56.88 24.40 7.86 97.90
Subjective pronouns VR1 59 3.61 2.16 0.00 9.18
Objective pronouns VR1 59 1.13 1.12 0.00 5.10
Note. Physiological stress = difference between heartrate (BPM) while teaching and baseline heartrate (BPM). Subjective pronouns = relative frequency of subjective first-person singular pronouns in written reflection. Objective pronouns = relative frequency of objective first-person singular
pronouns in written reflection. Gender: 0 = female.
present study applied psychological research on the relationship between negative affect and self-focus—which would potentially be
valuable for diagnostic tools, automated feedback systems, and the improvement of emotion regulation—to the context of student
teachers’ reflections. Initially, we looked at whether student teachers who used more subjective and objective first-person singular
pronouns focused more frequently on themselves rather than on the avatar students in the VR class or on the classroom environment as
a whole. In addition, we examined whether student teachers’ negative affect while teaching in a VR classroom—in terms of higher
self-reported stress and heartrate—affected their self-focus in written reflections operationalized via the subjective and objective use of
first-person singular pronouns. We also explored whether the use of subjective and objective first-person singular pronouns relates to
stress in a subsequent teaching session in a VR classroom.
Initially, we found that there were no differences in manual ratings of focus between student teachers with a low, average, or high
subjective or objective use of first-person singular pronouns. This is in line with findings illustrating that individuals can regard their
own actions and thoughts from a more or less immersed or distanced standpoint, which is reflected in their more or less frequent use of
subjective and objective first-person singular pronouns (Kross & Ayduk, 2008; Kross et al., 2014). While one student teacher, for
instance, described classroom disruptions during his lesson from a more immersed perspective, pointing out how it made him feel
(“Some classroom disturbances made me upset […]. The interruptions kept me from getting back to where I started and from finishing
properly.”), another student started his reflection by taking a more distanced perspective, describing his own actions from a
third-person perspective (“There is a recurring loss of the common thread due to class disruptions and the teacher trying to address
every minor disruption.”). Thus, the use of subjective and objective first-person singular pronouns provides incremental information beyond
manual ratings of focus in written reflections. Future research examining student teachers’ focus when reflecting on their own teaching
could therefore benefit from incorporating different measures of self-focus.
Fig. 1. (a) Focus in written reflections from content analysis by use of subjective first-person singular pronouns. (b) Focus in written reflections from content analysis by use of objective first-person singular pronouns. Note. Low use of subjective first-person singular pronouns: ≤0.5 SD below M; average use: ≥0.5 SD below M and ≤0.5 SD above M; high use: ≥0.5 SD above M.
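A minimal sketch of the banding rule described in the figure note, assuming the pronoun rates are available as a NumPy array; the cut points at 0.5 SD below and above the mean follow the note, while the function and variable names are illustrative.

```python
import numpy as np

def band_pronoun_use(rates: np.ndarray) -> np.ndarray:
    """Label each value as 'low', 'average', or 'high' relative to M +/- 0.5 SD,
    following the cut points given in the figure note."""
    m, sd = rates.mean(), rates.std(ddof=1)  # sample mean and SD
    labels = np.full(rates.shape, "average", dtype=object)
    labels[rates <= m - 0.5 * sd] = "low"
    labels[rates >= m + 0.5 * sd] = "high"
    return labels

rates = np.array([3.61, 0.4, 9.18, 2.0, 5.9])  # made-up per-100-word rates
print(band_pronoun_use(rates))
```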
Table 2
Predicting first-person singular pronoun use in written reflection by self-reported and physiological stress in first VR teaching session.
                          Subjective pronouns                   Objective pronouns
                          β         p        95% CI             β         p        95% CI
Model 1
Intercept                 0.74      0.110    [−0.16, 1.65]      0.17      0.700    [−0.68, 1.02]
Self-reported stress      0.21      0.100    [−0.04, 0.45]      0.20      0.160    [−0.08, 0.47]
Gender                    0.28*     0.020    [0.05, 0.51]       0.23*     0.040    [0.01, 0.45]
R2                        0.12                                  0.09
Model 2
Intercept                 0.47      0.240    [−0.31, 1.24]      −0.06     0.850    [−0.66, 0.54]
Physiological stress      0.35***   0.000    [0.13, 0.57]       0.32*     0.010    [0.09, 0.55]
Gender                    0.24      0.030    [0.02, 0.47]       0.20      0.110    [−0.04, 0.44]
R2                        0.18                                  0.14
Note. Coefficients are standardized. Gender: 0 = female.
Table 3
Predicting self-reported and physiological stress in subsequent VR teaching session by use of first-person singular pronouns in written reflection of previous VR teaching session.
                          β         p        95% CI             β         p        95% CI
Self-reported stress VR2  Model 3a                              Model 3b
Intercept                 1.38*     0.020    [0.24, 2.52]       1.20*     0.040    [0.05, 2.34]
Self-rep. stress VR1      0.37***   <0.001   [0.12, 0.61]       0.33*     0.010    [0.10, 0.57]
Class size in VR2         −0.16     0.190    [−0.39, 0.08]      −0.10     0.410    [−0.35, 0.14]
Subj. pronouns VR1        0.06      0.650    [−0.22, 0.35]
Obj. pronouns VR1                                               0.29*     0.030    [0.02, 0.55]
Gender                    0.07      0.590    [−0.19, 0.34]      0.04      0.770    [−0.20, 0.27]
R2                        0.18                                  0.23
Physiological stress VR2  Model 4a                              Model 4b
Intercept                 −1.29***  <0.001   [−1.89, −0.69]     −1.17***  <0.001   [−1.74, −0.60]
Phys. stress VR1          0.26***   <0.001   [0.09, 0.42]       0.25***   <0.001   [0.09, 0.41]
Class size in VR2         0.83***   <0.001   [0.75, 0.92]       0.84***   <0.001   [0.75, 0.93]
Subj. pronouns VR1        0.18*     0.040    [0.01, 0.35]
Obj. pronouns VR1                                               0.17      0.060    [−0.01, 0.35]
Gender                    0.05      0.560    [−0.12, 0.22]      0.05      0.510    [−0.11, 0.21]
R2                        0.72                                  0.71
Note. Coefficients are standardized. Gender: 0 = female. Class size: 0 = small.
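Because both tables report standardized coefficients, the following sketch shows one common way to obtain such coefficients: z-score the variables and fit ordinary least squares. This is only an illustration with made-up data; the study estimated its models with different software and handled missing data and binary covariates in ways not reproduced here.

```python
import numpy as np

def standardized_betas(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """OLS on z-scored predictors and outcome; returns standardized slopes."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    design = np.column_stack([np.ones(len(yz)), Xz])  # intercept column
    coefs, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coefs[1:]  # drop the (near-zero) intercept

rng = np.random.default_rng(0)
X = rng.normal(size=(46, 3))  # e.g., stress VR1, class size, pronoun use (made up)
y = 0.3 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=46)
print(standardized_betas(X, y).round(2))
```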
Our results also indicate that student teachers who experienced more physiological stress when teaching in the VR classroom used
more subjective and objective first-person singular pronouns in their written reflections. This finding is consistent with research
showing that negative affective experiences may provoke the use of subjective and objective first-person singular pronouns as a way to
process negative self-relevant information (Berry-Blunt et al., 2021; Klauke et al., 2020; Kowalski, 2000). However, given their
cross-sectional nature, our results could also indicate that student teachers who use less linguistic distancing—i.e., who are more
immersed in a situation, indicated by a greater use of subjective and objective first-person singular pronouns—experience higher
levels of negative affect, which is also in line with previous findings (Shahane et al., 2023). Our study is the first to confirm that this
association can be replicated in the context of student teachers’ reflections on their own teaching. Our results further extend existing
research by revealing that this relationship generalizes to a physiological indicator of negative affect, namely to individuals’
heartrates.
In terms of gender differences, our results suggested that student teachers who identified as male used more subjective and
objective first-person singular pronouns than student teachers who identified as female. These gender differences disappeared when
controlling for physiological stress experienced during the VR teaching session. The finding that men used subjective and objective
first-person singular pronouns more frequently than women is not in line with related research that shows a greater tendency in women
to ruminatively self-focus when depressed, and thus when experiencing high levels of negative affect (Johnson & Whisman, 2013). It has,
however, been suggested that men’s use of subjective and objective first-person singular pronouns is an indicator of high-arousal
negative distress, while women’s use of first-person singular pronouns is driven by low-arousal negative distress (Tackman et al.,
2019). In our study, student teachers were reflecting on a teaching situation that had the potential to elicit high-arousal negative
distress. Indeed, we found that, when controlling for physiological stress experienced during the VR session, gender differences in the
use of subjective and objective first-person singular pronouns disappeared. These results further support the notion that the use of
first-person singular pronouns may be an indicator of different affective experiences in women and men (Fast & Funder, 2010).
In addition, we found that greater use of subjective first-person singular pronouns predicted higher physiological stress in the subsequent VR session. Thus, we found some indication that student teachers who took a more distanced perspective experienced reduced
future stress (as suggested by Berry-Blunt et al., 2021; Zimmermann et al., 2016). This is in line with research showing that individuals
who spontaneously use more linguistic distancing when reflecting on negative and positive events report lower levels of stress in these
and in subsequent situations, and overall greater well-being (Shahane et al., 2023). Although the use of subjective and
objective first-person singular pronouns may be a strategy that student teachers use to process negative teaching experiences, this
strategy—referred to as rumination—is considered maladaptive (e.g., Mor & Winquist, 2002). Rumination—i.e., the strategy of
regulating negative mood by repeatedly focusing one’s attention on one’s own negative experiences and their causes and effects (Nolen-Hoeksema, 1991)—has been associated with depression (Hong, 2007), inefficient problem-solving, and lower self-efficacy
(Lyubomirsky et al., 2003; Reindl et al., 2020). Teachers who ruminate more experience higher levels of stress in the classroom
and are more susceptible to burnout (Košir et al., 2015). We assessed naturally occurring differences in the use of subjective and
objective first-person singular pronouns and may therefore have underestimated the benefits of taking a more distanced perspective.
Future studies with greater sample sizes and power should aim to explore whether a similar effect might emerge for objective
first-person pronouns, for which we found a marginally significant p-value.
In contrast, our data did not support the association between self-reported negative affect and the use of subjective and objective
first-person singular pronouns. Berry-Blunt et al. (2021) proposed that some psychometric units, i.e., “facets, nuances, and items” (p. 8)
might capture I-talk better than others; our self-report measure of stress may not have been ideal in this respect. Moreover, the
self-report was assessed after the VR session, which potentially led to lower congruence between physiological and self-reported stress
responses than when self-report is assessed continuously during the stressful situation (Campbell & Ehlert, 2012).
4.1. Pedagogical implications
While reflection can improve student teachers’ professional vision, and is therefore seen as an important tool in teacher education
(e.g., Stürmer et al., 2013; Weber et al., 2020), less is known about its potential to assist adaptive emotion regulation strategies that
teachers need in order to be able to cope with challenging classroom events (Chang, 2009). Reappraising a challenging situation is seen
as an effective strategy (Gross, 2002) by which teachers change how they think about an event and thereby decrease its emotional
impact (Chang, 2009; Gross, 2002). When engaging in reappraisal, teachers may reduce their use of first-person singular pronouns,
indicating their greater psychological distance to a challenging situation (Nook et al., 2020). Our findings indicate that an increased
self-focus in student teachers’ written reflections—as indicated by a more frequent use of subjective first-person singular pronouns—is
associated with greater physiological stress. Automated feedback systems could build on this finding by identifying student teachers
who repeatedly experience elevated stress and are thus at risk for depression and burnout. This could complement feedback on the
quality of their written reflections (Wulff et al., 2022, 2023). We found some indication that taking a more distanced perspective—as
indicated by less frequent use of subjective first-person pronouns—reduces future stress (as suggested by Berry-Blunt et al., 2021;
Shahane et al., 2023; Zimmermann et al., 2016). Practicing taking a more distanced perspective on negative events is not only seen as
an adaptive emotion-regulation strategy (Kross & Ayduk, 2008), but may also, over time, change a student teacher’s tendency to experience
stress (for a similar suggestion outside the context of teacher education, see Berry-Blunt et al., 2021). A structured practice of reappraisal can facilitate adaptive emotion regulation (Christou-Champi et al., 2015). Thus, training student teachers to use more reappraisal and put more distance between themselves and challenging classroom situations when reflecting on their teaching could be a
viable strategy to help them cope with stress. In addition, Ehring (2020) suggests that ruminative thinking, i.e., focusing one’s
attention on one’s own negative experiences, can be transformed into more adaptive information processing by focusing attention on
physical reactions and emotions in a specific situation and fostering self-compassion, which has been shown to be incompatible with
ruminative thinking (Watkins, 2016). Combining video-based reflection on one’s own teaching with reappraisal and
mindfulness-based strategies could be a promising way to foster student teachers’ professional vision, as well as their well-being and
stress-resistance.
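As a concrete, purely illustrative reading of the screening idea above, the sketch below flags participants whose heartrate elevation exceeds a chosen cut-off in both VR sessions. The cut-off of one standard deviation above the sample mean, the data layout, and all names are assumptions and not part of the study.

```python
import numpy as np

def flag_repeatedly_stressed(stress_vr1, stress_vr2, z_cut=1.0):
    """Indices of participants whose BPM elevation exceeds mean + z_cut * SD
    in both VR sessions (an illustrative screening rule only)."""
    s1 = np.asarray(stress_vr1, dtype=float)
    s2 = np.asarray(stress_vr2, dtype=float)
    high1 = s1 > s1.mean() + z_cut * s1.std(ddof=1)
    high2 = s2 > s2.mean() + z_cut * s2.std(ddof=1)
    return np.where(high1 & high2)[0]

stress_vr1 = [60.4, 21.6, 108.4, 45.0, 95.2]  # made-up BPM elevations, session 1
stress_vr2 = [56.9, 7.9, 97.9, 40.0, 92.5]    # made-up BPM elevations, session 2
print(flag_repeatedly_stressed(stress_vr1, stress_vr2))  # -> [2]
```

In a real feedback system, such a rule would have to be validated and combined with other indicators (e.g., the quality ratings of written reflections mentioned above) before any individual feedback is given.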
4.2. Limitations and future research
The current study has some limitations. The sample size of the study was small. Our results should therefore be replicated using a
larger sample, which could more effectively detect small effects. By providing a highly standardized setting, the VR classroom increases the internal validity of our research findings, which may come at the cost of ecological validity. It has been argued that VR
creates a perceptual illusion and “the real power of VR […] [is that] even though you know it is an illusion, this does not change your
perception or your response to it” (Slater, 2018, p. 2). Some features of the VR environment, such as a realistic display of the environment, a smooth display of motion and view changes, and control of behaviors, are seen as essential to increase the likelihood of
optimal learning in VR (Dalgarno & Lee, 2010; Delamarre et al., 2021). Prior studies showed that student teachers perceived our VR
classroom as realistic and authentic (Wiepke et al., 2019, 2021). Student teachers trained in our VR classroom showed similar reflection
processes compared to students reflecting on real classroom videos and showed a substantial increase in reflection-related self-efficacy
over time (Richter et al., 2022). Nevertheless, more validation studies are needed to evaluate the transferability of the positive results
of the participation in a VR learning setting to authentic classrooms. Moreover, future studies should investigate whether our findings
are generalizable to non-virtual classroom environments and to in-service teachers. Studies should also incorporate further self-report
measures of negative affect to “identify […] the smaller psychometric units (e.g., trait facets, nuances, and items) that best capture
I-talk” (Berry-Blunt et al., 2021, p. 8). To better understand the temporal dynamics between negative affect and self-focused attention,
a longitudinal study with multiple measurements is necessary. Such a study design could help explain the extent to which
situation-specific and personal characteristics play a role in the interplay between the use of subjective and objective first-person
singular pronouns and negative affect. The extent to which student teachers can be trained to more professionally process negative
events could be explored by experimentally manipulating the ways in which student teachers describe and evaluate negative classroom
events—taking either a distanced or a self-immersed perspective—by prompting student teachers to use distanced vs. self-referential
language or by instructing them to reframe negative events vs. self-immerse in their emotions (see also Nook et al., 2017). Intervention
studies could be a promising approach to help identify how reflection could be implemented in a way that is beneficial for student
teachers’ professional vision and their emotion regulation, without overwhelming them.
5. Conclusions
Thanks to a standardized VR classroom environment, our study is the first to provide evidence that student teachers’ self-focus in
their written reflections is linked to the stress they experience while teaching. Our multimodal assessment of stress—based on student
teachers’ self-reports and heartrate responses—allowed for a differentiated approach to studying emotions in the VR learning environment. Not only can we show that the association between negative affect and self-focus—measured via I-talk—holds in student
teachers’ written reflections on their own teaching, but our study also adds to previous findings by showing that this link can be
generalized to individuals’ heartrates, representing a physiological indicator of negative affect. These results point to the potential that
reflecting on one’s own teaching may have for practicing adaptive emotion regulation strategies in teacher education programs.
Credit author statement
AW: Conceptualization, Formal analysis, Writing – original draft, Reviewing and Editing. ER: Conceptualization, Investigation, Data curation, Project administration, Writing – Reviewing and Editing. RL: Writing – Reviewing and Editing. YH: Investigation, Project administration.
Declaration of competing interest
None.
Data availability
The authors do not have permission to share data.
APPENDIX
Table A.1
Descriptive statistics for baseline heartrate and heartrate in VR teaching session
N M SD Min Max
Baseline BPM T1 42 96.92 30.88 44.41 173.62
Baseline BPM T2 50 102.37 30.03 40.34 175.88
BPM in VR session T1 57 161.14 20.44 125.02 196.45
BPM in VR session T2 55 160.65 20.00 115.57 196.59
Note. Measures used to compute physiological stress, i.e., difference in heartrate between baseline and VR session.
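A minimal sketch of how the physiological stress score described in this note can be derived from the two heartrate measures; the paired example values are made up for illustration.

```python
import numpy as np

def physiological_stress(bpm_vr, bpm_baseline):
    """Per-participant stress score: heartrate while teaching in VR minus baseline heartrate (BPM)."""
    return np.asarray(bpm_vr, dtype=float) - np.asarray(bpm_baseline, dtype=float)

# Made-up paired values (baseline at rest vs. VR teaching session) for three participants
print(physiological_stress([161.1, 130.0, 150.5], [96.9, 88.2, 120.0]))  # roughly [64.2 41.8 30.5]
```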
Table A.2
Predicting self-reported stress in subsequent VR teaching session by physiological stress and use of first-person singular pronouns in written reflection of previous VR teaching session.
                           β       p       95% CI
Self-reported stress VR2   Model 5a
Intercept                  0.39    0.560   [−0.93, 1.71]
Self-reported stress VR1   0.31*   0.010   [0.06, 0.55]
Physiological stress VR1   0.29*   0.040   [0.01, 0.57]
Class size in VR2          −0.06   0.640   [−0.31, 0.19]
Objective pronouns VR1     0.24    0.110   [−0.05, 0.54]
Gender                     0.06    0.620   [−0.18, 0.30]
R2                         0.28
Note. Coefficients are standardized. Gender: 0 = female. Class size: 0 = small.
Fig. A.1. VR Classroom from the perspective of student teachers.
Fig. A.2. Student teacher teaching in VR Classroom.
References
Bernard, J. D., Baddeley, J. L., Rodriguez, B. F., & Burke, P. A. (2016). Depression, language, and affect: An examination of the influence of baseline depression and
affect induction on language. Journal of Language and Social Psychology, 35(3), 317–326. https://doi.org/10.1177/0261927X15589186
Berry-Blunt, A. K., Holtzman, S. H., Donnellan, M. B., & Mehl, M. R. (2021). The story of “I” tracking: Psychological implications of self-referential language use. Social
and Personality Psychology Compass, 15(2), Article e12647. https://doi.org/10.1111/spc3.12647
Borko, H. (2004). Professional development and teacher learning: Mapping the terrain. Educational Researcher, 33(8), 3–15. https://doi.org/10.3102/0013189X033008003
Borko, H. (2016). Methodological contributions to video-based studies of classroom teaching and learning: A commentary. ZDM, 48(1), 213–218. https://doi.org/10/gfs2vk
Borko, H., Jacobs, J., Eiteljorg, E., & Pittman, M. E. (2008). Video as a tool for fostering productive discussions in mathematics professional development. Teaching and
Teacher Education, 24(2), 417–436. https://doi.org/10.1016/j.tate.2006.11.012
Campbell, J., & Ehlert, U. (2012). Acute psychosocial stress: Does the emotional stress response correspond with physiological responses? Psychoneuroendocrinology,
37(8), 1111–1134. https://doi.org/10.1016/j.psyneuen.2011.12.010
Chang, M. L. (2009). An appraisal perspective of teacher burnout: Examining the emotional work of teachers. Educational Psychology Review, 21, 193–218. https://doi.
org/10.1007/s10648-009-9106-y
Chernikova, O., Heitzmann, N., Fink, M. C., Timothy, V., Seidel, T., Fischer, F., & DFG Research group COSIMA. (2020). Facilitating diagnostic competences in higher
education - a meta-analysis in medical and teacher education. Educational Psychology Review, 32, 157–196. https://doi.org/10.1007/s10648-019-09492-2
Christou-Champi, S., Farrow, T. F., & Webb, T. L. (2015). Automatic control of negative emotions: Evidence that structured practice increases the efficiency of emotion
regulation. Cognition & Emotion, 29(2), 319–331. https://doi.org/10.1080/02699931.2014.901213
Dalgarno, B., & Lee, M. J. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–32. https://doi.
org/10.14221/ajte.2016v41n1.8
Delamarre, A., Shernoff, E., Buche, C., Frazier, S., Gabbard, J., & Lisetti, C. (2021). The interactive virtual training for teachers (IVT-T) to practice classroom behavior
management. International Journal of Human-Computer Studies, 152, Article 102646. https://doi.org/10.1016/j.ijhcs.2021.102646
Delaney, J. P. A., & Brodie, D. A. (2000). Effects of short-term psychological stress on the time and frequency domains of heart-rate variability. Perceptual and Motor
Skills, 91(2), 515–524. https://doi.org/10.2466/pms.2000.91.2.515
Dunnack, E. S., & Park, C. L. (2009). The effect of an expressive writing intervention on pronouns: The surprising case of I. Journal of Loss & Trauma, 14(6), 436–446.
https://doi.org/10.1080/15325020902925084
Edwards, T., & Holtzman, N. S. (2017). A meta-analysis of correlations between depression and first singular pronoun use. Journal of Research in Personality, 68, 63–68.
https://doi.org/10.1016/j.jrp.2017.02.005
Ehring, T. (2020). Die ruminationsfokussierte Kognitive Verhaltenstherapie [Rumination-focused cognitive-Behavioral Therapy]. Zeitschrift für Psychiatrie, Psychologie
und Psychotherapie, 68(3), 150–159. https://doi.org/10.1024/1661-4747/a000414
Enders, C. K. (2001). The impact of nonnormality on full information maximum-likelihood estimation for structural equation models with missing data. Psychological
Methods, 6, 352–370. https://doi.org/10.1037/1082-989X.6.4.352
Exner, J. E., Jr. (1973). The self focus sentence completion: A study of egocentricity. Journal of Personality Assessment, 37(5), 437–455. https://doi.org/10.1080/
00223891.1973.10119902
Fast, L. A., & Funder, D. C. (2010). Gender differences in the correlates of self-referent word use: Authority, entitlement, and depressive symptoms. Journal of
Personality, 78, 313–338. https://doi.org/10.1111/j.1467-6494.2009.00617.x
Fenigstein, A., Scheier, M. F., & Buss, A. H. (1975). Public and private self-consciousness: Assessment and theory. Journal of Consulting and Clinical Psychology, 43(4),
522–527. https://doi.org/10.1037/h0076760
Goddard, R., O’Brien, P., & Goddard, M. (2006). Work environment predictors of beginning teacher burnout. British Educational Research Journal, 32(6), 857–874. https://doi.org/10.1080/01411920600989511
Gold, B., & Windscheid, J. (2020). Observing 360-degree classroom videos–Effects of video type on presence, emotions, workload, classroom observations, and ratings
of teaching quality. Computers & Education, 156, 103960. https://doi.org/10.1016/j.compedu.2020.103960.
Gross, J. J. (2002). Emotion regulation: Affective, cognitive, and social consequences. Psychophysiology, 39(3), 281–291. https://doi.org/10.1017/S0048577201393198
Hallquist, M. N., & Wiley, J. F. (2018). MplusAutomation: An R package for facilitating large-scale latent variable analyses in Mplus. Structural Equation Modeling: A
Multidisciplinary Journal, 25(4), 621–638. https://doi.org/10.1080/10705511.2017.1402334
Hong, R. Y. (2007). Worry and rumination: Differential associations with anxious and depressive symptoms and coping behavior. Behavior Research and Therapy, 45
(2), 277–290. https://doi.org/10.1016/j.brat.2006.03.006
Huang, Y., Richter, E., Kleickmann, T., & Richter, D. (2022). Class size affects preservice teachers’ physiological and psychological stress reactions: An experiment in a
virtual reality classroom. Computers & Education, 184, 104503. https://doi.org/10.1016/j.compedu.2022.104503.
Huang, Y., Richter, E., Kleickmann, T., Wiepke, A., & Richter, D. (2021). Classroom complexity affects student teachers’ behavior in a VR classroom. Computers &
Education, 163, 104100. https://doi.org/10.1016/j.compedu.2020.104100.
Hultell, D., Melin, B., & Gustavsson, J. P. (2013). Getting personal with teacher burnout: A longitudinal study on the development of burnout using a person-based
approach. Teaching and Teacher Education, 32, 75–86. https://doi.org/10.1016/j.tate.2013.01.007
James, W. (1890). The principles of psychology. New York, NY: Holt & Co.
Johnson, D. P., & Whisman, M. A. (2013). Gender differences in rumination: A meta-analysis. Personality and Individual Differences, 55, 367–374. https://doi.org/
10.1016/j.paid.2013.03.019
Ke, F., & Xu, X. (2020). Virtual reality simulation-based learning of teaching with alternative perspectives taking. British Journal of Educational Technology, 51(6),
2544–2557. https://doi.org/10.1111/bjet.12936
Kern, M. L., Eichstaedt, J. C., Schwartz, H. A., Dziurzynski, L., Ungar, L. H., Stillwell, D. J., Kosinski, M., Ramones, S. M., & Seligman, M. E. P. (2014). The online social
self: An open vocabulary approach to personality. Assessment, 21(2), 158–169. https://doi.org/10.1177/1073191113514104
Klauke, F., Müller-Frommeyer, L. C., & Kauffeld, S. (2020). Writing about the silence: Identifying the language of ostracism. Journal of Language and Social Psychology,
39(5–6), 751–763. https://doi.org/10.1177/0261927X19884599
Kleinknecht, M. (2021). Emotionen von Lehrkräften in unterrichtsvideobasierten Fortbildungen [Teachers’ emotions in video-based training]. In M. Gläser-Zikuda, F. Hofmann, & V. Frederking (Eds.), Emotionen im Unterricht: Psychologische, pädagogische und fachdidaktische Perspektiven [Emotions in the classroom: Psychological, pedagogical and didactical perspectives] (pp. 231–243). Stuttgart, Germany: Kohlhammer Verlag.
Kleinknecht, M., & Gröschner, A. (2016). Fostering preservice teachers’ noticing with structured video feedback: Results of an online- and video-based intervention
study. Teaching and Teacher Education, 59, 45–56. https://doi.org/10.1016/j.tate.2016.05.020
Kleinknecht, M., & Schneider, J. (2013). What do teachers think and feel when analyzing videos of themselves and other teachers teaching? Teaching and Teacher
Education, 33, 13–23. https://doi.org/10.1016/j.tate.2013.02.002
Košir, K., Tement, S., Licardo, M., & Habe, K. (2015). Two sides of the same coin? The role of rumination and reflection in elementary school teachers’ classroom stress
and burnout. Teaching and Teacher Education, 47, 131–141.
Kowalski, R. M. (2000). “I was only kidding!” Victims’ and perpetrators’ perceptions of teasing. Personality and Social Psychology Bulletin, 26, 231–241. https://doi.
org/10.1177/0146167200264009
Kross, E., & Ayduk, O. (2008). Facilitating adaptive emotional analysis: Distinguishing distanced-analysis of depressive experiences from immersed-analysis and
distraction. Personality and Social Psychology Bulletin, 34(7), 924–938. https://doi.org/10.1177/0146167208315938
Kross, E., Bruehlman-Senecal, E., Park, J., Burson, A., Dougherty, A., Shablack, H., Bremner, R., Moser, J., & Ayduk, O. (2014). Self-talk as a regulatory mechanism:
How you do it matters. Journal of Personality and Social Psychology, 106(2), 304–324. https://doi.org/10.1037/a0035173
Lin, Y. C. (2023). Using virtual classroom simulations in a mathematics methods course to develop pre-service primary mathematics teachers’ noticing skills. British
Journal of Educational Technology, 54(3), 734–753. https://doi.org/10.1111/bjet.13291
Little, R. J. A. (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83(404),
1198–1202. https://doi.org/10.2307/2290157
Lohse-Bossenz, H., Schönknecht, L., & Brandtner, M. (2019). Entwicklung und Validierung eines Fragebogens zur Erfassung Reflexionsbezogener Selbstwirksamkeit von Lehrkräften im Vorbereitungsdienst [Development and validation of a questionnaire assessing pre-service teachers’ self-efficacy in reflection]. Empirische Pädagogik, 33(2), 164–179.
Lugrin, J.-L., Latoschik, M. E., Habel, M., Roth, D., Seufert, C., & Grafe, S. (2016). Breaking bad behaviors: A new tool for learning classroom management using
virtual reality. Frontiers in ICT, 3(26). https://doi.org/10.3389/fict.2016.00026
Lyubomirsky, S., Kasri, F., & Zehm, K. (2003). Dysphoric rumination impairs concentration on academic tasks. Cognitive Therapy and Research, 27(3), 309–330.
https://doi.org/10.1023/A:1023918517378
Mehl, M. R., Gosling, S. D., & Pennebaker, J. W. (2006). Personality in its natural habitat: Manifestations and implicit folk theories of personality in daily life. Journal
of Personality and Social Psychology, 90(5), 862–877. https://doi.org/10.1037/0022-3514.90.5.862
Mor, N., & Winquist, J. (2002). Self-focused attention and negative affect: A meta-analysis. Psychological Bulletin, 128(4), 638–662. https://doi.org/10.1037/0033-
2909.128.4.638
Nolen-Hoeksema, S. (1991). Responses to depression and their effects on the duration of depressive episodes. Journal of Abnormal Psychology, 100(4), 569–582.
https://doi.org/10.1037/0021-843X.100.4.569
Nook, E. C., Schleider, J. L., & Somerville, L. H. (2017). A linguistic signature of psychological distancing in emotion regulation. Journal of Experimental Psychology:
General, 146(3), 337–346. https://doi.org/10.1037/xge0000263
Nook, E. C., Vidal Bustamante, C. M., Cho, H. Y., & Somerville, L. H. (2020). Use of linguistic distancing and cognitive reappraisal strategies during emotion regulation
in children, adolescents, and young adults. Emotion, 20(4), 525. https://doi.org/10.1037/emo0000570
Pendergast, D., O’Brien, M., Prestridge, S., & Exley, B. (2022). Self-efficacy in a 3-dimensional virtual reality classroom—initial teacher education students’
experiences. Education Sciences, 12(6), 368. https://doi.org/10.3390/educsci12060368
Prilop, C. N., Weber, K. E., & Kleinknecht, M. (2021). The role of expert feedback in the development of pre-service teachers’ professional vision of classroom
management in an online blended learning environment. Teaching and Teacher Education, 99, Article 103276. https://doi.org/10.1016/j.tate.2020.103276
Reindl, M., Tulis, M., & Dresel, M. (2020). Profiles of emotional and motivational self-regulation following errors: Associations with learning. Learning and Individual
Differences, 77, Article 101806. https://doi.org/10.1016/j.lindif.2019.101806
Remacle, A., Bouchard, S., & Morsomme, D. (2023). Can teaching simulations in a virtual classroom help trainee teachers to develop oral communication skills and
self-efficacy? A randomized controlled trial. Computers & Education, 200, Article 104808. https://doi.org/10.1016/j.compedu.2023.104808
Richter, E., Hußner, I., Huang, Y., Richter, D., & Lazarides, R. (2022). Video-based reflection in teacher education: Comparing virtual reality and real classroom videos.
Computers & Education, 24(3), Article 104601. https://doi.org/10.1016/j.compedu.2022.104601
Rude, S., Gortner, E.-M., & Pennebaker, J. W. (2004). Language use of depressed and depression-vulnerable college students. Cognition & Emotion, 18(8), 1121–1133.
https://doi.org/10.1080/02699930441000030
Seidel, T., Stürmer, K., Blomberg, G., Kobarg, M., & Schwindt, K. (2011). Teacher learning from analysis of videotaped classroom situations: Does it make a difference
whether teachers observe their own teaching or that of others? Teaching and Teacher Education, 27(2), 259–267. https://doi.org/10.1016/j.tate.2010.08.009
Seufert, C., Oberdörfer, S., Roth, A., Grafe, S., Lugrin, J. L., & Latoschik, M. E. (2022). Classroom management competency enhancement for student teachers using a
fully immersive virtual classroom. Computers & Education, 179, Article 104410. https://doi.org/10.1016/j.compedu.2021.104410
Shahane, A. D., Godfrey, D. A., & Denny, B. T. (2023). Predicting real-world emotion and health from spontaneously assessed linguistic distancing using novel scalable
technology. Emotion. Advance online publication. https://doi.org/10.1037/emo0001211
Stürmer, K., Könings, K. D., & Seidel, T. (2013). Declarative knowledge and professional vision in teacher education: Effect of courses in teaching and learning. British
Journal of Educational Psychology, 83(3), 467–483. https://doi.org/10.1111/j.2044-8279.2012.02075.x
Tackman, A. M., Sbarra, D. A., Carey, A. L., Donnellan, M. B., Horn, A. B., Holtzman, N. S., Edwards, T. S., Pennebaker, J. W., & Mehl, M. R. (2019). Depression,
negative emotionality, and self-referential language: A multi-lab, multi-measure, and multi-language-task research synthesis. Journal of Personality and Social
Psychology, 116(5), 817–834. https://doi.org/10.1037/pspp0000187
Tierney, N., Cook, D., McBain, M., & Fay, C. (2021). naniar: Data structures, summaries, and visualisations for missing data. R package version 0.6.1. https://CRAN.R-project.org/package=naniar
van Es, E. A., & Sherin, M. G. (2002). Learning to notice: Scaffolding new teachers’ interpretations of classroom interactions. Journal of Technology and Teacher
Education, 10(4), 571–596. https://www.learntechlib.org/primary/p/9171/.
van Es, E. A., & Sherin, M. G. (2008). Mathematics teachers’ “learning to notice” in the context of a video club. Teaching and Teacher Education, 24(2), 244–276.
https://doi.org/10.1016/j.tate.2006.11.005
Voss, T., & Kunter, M. (2020). “Reality shock” of beginning teachers? Changes in teacher candidates’ emotional exhaustion and constructivist-oriented beliefs. Journal
of Teacher Education, 71(3), 292–306. https://doi.org/10.1177/0022487119839700
Watkins, E. R. (2016). Rumination-focused cognitive-behavioral therapy for depression. New York: Guilford.
Weber, K. E., Prilop, C. N., Viehoff, S., Gold, B., & Kleinknecht, M. (2020). Fördert eine videobasierte Intervention im Praktikum die professionelle Wahrnehmung von Klassenführung? – Eine quantitativ-inhaltsanalytische Messung von Subprozessen professioneller Wahrnehmung [Does a video-based intervention during the practicum foster professional vision of classroom management? A quantitative content-analytic measurement of subprocesses of professional vision]. Zeitschrift für Erziehungswissenschaft, 23, 343–365. https://doi.org/10.1007/s11618-020-00939-9
Wegner, D. M., & Giuliano, T. (1980). Arousal-induced attention to self. Journal of Personality and Social Psychology, 38(5), 719–726. https://doi.org/10.1037/0022-
3514.38.5.719
Westphal, A., Kalinowski, E., Hoferichter, C. J., & Vock, M. (2022). K–12 teachers’ stress and burnout during the COVID-19 pandemic: A systematic review. Frontiers in Psychology, 13, 920326. https://doi.org/10.3389/fpsyg.2022.920326
Wickham, H. (2019). stringr: Simple, consistent wrappers for common string operations. R package version 1.4.0. https://CRAN.R-project.org/package=stringr.
Wiepke, A., Heinemann, B., Lucke, U., & Schroeder, U. (2021). Jenseits des eigenen Klassenzimmers: Perspektiven & Weiterentwicklungen des VR-Klassenzimmers
[Beyond the own classroom: Perspectives & further developments of the VR classroom.]. DELFI 2021 - Die 19. Fachtagung Bildungstechnologien [Educational
technologies symposium], 331–336. http://dl.gi.de/handle/20.500.12116/37031.
Wolff, C. E., Jarodzka, H., van den Bogert, N., & Boshuizen, H. P. A. (2016). Teacher vision: Expert and novice teachers’ perception of problematic classroom
management scenes. Instructional Science, 44(3), 243–265. https://doi.org/10/f8tcpm
Wulff, P., Westphal, A., Mientus, L., Nowak, A., & Borowski, A. (2023). Enhancing writing analytics in science education research with machine learning and natural language processing—Formative assessment of science and non-science preservice teachers’ written reflections. Frontiers in Education, 7, Article 1061461. https://doi.org/10.3389/feduc.2022.1061461
Wulff, P., Buschhüter, D., Westphal, A., Mientus, L., Nowak, A., & Borowski, A. (2022). Bridging the gap between qualitative and quantitative assessment in science
education research with machine learning—A case for pretrained language models-based clustering. Journal of Science Education and Technology, 31(4), 490–513.
https://doi.org/10.1007/s10956-022-09969-w.
Yarkoni, T. (2010). Personality in 100,000 words: A large-scale analysis of personality and word use among bloggers. Journal of Research in Personality, 44(3),
363–373. https://doi.org/10.1016/j.jrp.2010.04.001
Yee, N., Harris, H., Jabon, M., & Bailenson, J. N. (2011). The expression of personality in virtual worlds. Social Psychological and Personality Science, 2(1), 5–12.
https://doi.org/10.1177/1948550610379056
Zhang, M., Lundeberg, M., Koehler, M. J., & Eberhardt, J. (2011). Understanding affordances and challenges of three types of video for teacher professional
development. Teaching and Teacher Education, 27(2), 454–462. https://doi.org/10.1016/j.tate.2010.09.015
Zimmermann, J., Brockmeyer, T., Hunn, M., Schauenburg, H., & Wolf, M. (2016). First-person pronoun use in spoken language as a predictor of future depressive
symptoms: Preliminary evidence from a clinical sample of depressed patients. Clinical Psychology & Psychotherapy, 24(2), 384–391. https://doi.org/10.1002/
cpp.2006
Wiepke, A., Richter, E., Zender, R., & Richter, D. (2019). Einsatz von Virtual Reality zum Aufbau von Klassenmanagement-Kompetenzen im Lehramtsstudium [Use of
virtual reality for training student teachers’ classroom management competencies]. DELFI, 2019. https://doi.org/10.18420/delfi2019_319
Write the answer in one paragraph, using full sentences. Use only the document provided. Use language that is easy to understand.
What are the negative and positive aspects of virtual teaching for the instructors?
Computers & Education 212 (2024) 104987
Available online 3 January 2024
0360-1315/© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
More I-talk in student teachers’ written reflections indicates higher stress during VR teaching
Andrea Westphal a,*, Eric Richter b, Rebecca Lazarides b, Yizhen Huang b
a University of Greifswald, Department of Education, Steinbeckerstr. 15, 17487, Greifswald, Germany
b University of Potsdam, Department of Education, Karl-Liebknecht-Straße 24-25, 14476, Potsdam, Germany
ARTICLE INFO
Keywords:
Augmented and virtual reality
Improving classroom teaching
Teacher professional development
ABSTRACT
Video-based reflection on one’s own teaching represents a crucial tool in teacher education. When
student teachers reflect on negative classroom events, this elicits “self-focused attention,” which has been associated with more intense negative emotionality. Self-focused attention can be quantitatively captured using first-person singular pronouns (“I,” “me,” “my”) in written reflections by, for instance, student teachers. What is unclear is whether student teachers’ use of these first-person singular pronouns in their written reflections is linked to and predicts their negative affective experiences during teaching. For the present study, a fully immersive virtual reality (VR)
classroom was implemented in which student teachers taught a lesson, provided written reflections on their teaching, and then taught a second lesson. We measured N = 59 student
teachers’ self-reported stress and heartrate responses while teaching in the VR classroom and
determined the percentage of first-person singular pronouns in their written reflections. Firstly,
our results showed that the use of first-person singular pronouns provides incremental information on manual ratings of student teachers’ foci in their written reflections. Secondly, student
teachers’ heartrates during instruction—a measure of physiological stress—were associated with
the use of first-person singular pronouns in subsequent written reflections. Thirdly, the use of
first-person singular pronouns predicted the increase in physiological stress from the first to the
second round of VR teaching. We discuss implications for automated feedback and for designing
reflective tasks.
* Corresponding author.
E-mail addresses: [email protected] (A. Westphal), [email protected] (E. Richter), [email protected] (R. Lazarides), [email protected] (Y. Huang).
1 In our study, the term “student teachers” refers to those prospective teachers who have not yet begun the supervised teaching portion of their teacher education program.
https://doi.org/10.1016/j.compedu.2024.104987
Received 21 November 2022; Received in revised form 11 November 2023; Accepted 2 January 2024
1. Introduction
Teaching is often seen as a challenging profession (Chang, 2009; Westphal et al., 2022). The transition to school practice, at least when it takes place in real school classrooms, is especially demanding for student teachers1 (Goddard et al., 2006; Hultell et al., 2013; Voss & Kunter, 2020). Fully immersive virtual reality (VR) classrooms provide a safe environment for student teachers to gain hands-on teaching experience (Gold & Windscheid, 2020; Lin, 2023; Lugrin et al., 2016; Pendergast et al., 2022; Remacle et al., 2023; Richter et al., 2022; Seufert et al., 2022). Ke and Xu (2020) suggested that active learning processes (“diving in”) and reflective learning processes (“stepping out”) can be easily combined in VR classrooms. Thus, it may be useful to practice reflecting on classroom
situations from a more distanced perspective using VR classrooms early on in teacher education to help student teachers make a
smooth transition into the classroom. Because the setting is standardized and similar for all student teachers, VR classrooms allow us to
more accurately research the contributing factors and consequences of student teachers’ self-reflection; the highly controllable setting
increases the internal validity of research findings (Huang et al., 2021; Richter et al., 2022). This is of particular interest for research
looking at the interrelationships between student teachers’ affects in relation to their classroom experiences and their reflections upon
those experiences. VR classrooms allow us, for instance, to apply a finding from experimental psychology to the field of teacher education. Research in the fields of experimental psychology and clinical psychology has shown that self-focus after negative events is
accompanied by greater negative affect (e.g., Mor & Winquist, 2002). This finding has been explained by dysfunctional
emotion-regulation strategies (e.g., Nook et al., 2017). Applied to the context of teacher education, these findings may imply that
student teachers who are less skilled at transitioning between a more immersed perspective and a more distanced perspective on
challenging classroom events may experience more negative affect. The degree of a person’s self-focus can be determined based on the
frequency with which they use first-person singular pronouns (e.g., “I,” “me,” “my”); this has also been called “I-talk” (Tackman et al.,
2019). It would be highly relevant to examine this link between self-focused attention in written reflections—as reflected by more
frequent I-talk—and negative affect during instruction in the field of teacher education, because knowledge about the link between the
use of first-person singular pronouns and negative affect could be implemented in automated feedback systems to help identify student
teachers at risk of intense stress and burnout, allowing them to be offered personalized feedback and support. To date, however, these
questions have not been explored.
To examine whether student teachers’ use of first-person singular pronouns is an indicator of the stress level they experience in the
classroom, and whether it can predict changes in a student teachers’ stress, a highly standardized teaching situation is preferable,
because it ensures that all student teachers under study are reflecting on the same learning environment and on similar classroom
events at all measurement occasions. A VR classroom creates just such a setting. In the present study, class size in the VR classroom was
manipulated to induce higher levels of stress (large-class-size condition with a higher number of student avatars) and lower levels of
stress (small-class-size condition with a lower number of student avatars). Initially, we checked whether the use of first-person pronouns in written reflections provides incremental information beyond subsequent manual ratings of student teachers’ foci (whether on
their own actions vs. student actions vs. the learning environment) in these reflections. In addition, we examine whether student
teachers who experience more self-reported stress and physiological stress (as measured by heartrate response) while teaching in a VR
classroom use more first-person singular pronouns in written reflections on their teaching. We also explore whether student teachers’
use of first-person singular pronouns in written reflections predicts the increase in stress in a subsequent VR lesson.
1.1. Reflection in teacher education: Technological advances
“Reflection” has been defined as “deliberate, purposeful, metacognitive thinking and/or action” (Košir et al., 2015, p. 113) that is
believed to enhance instructional quality (Chernikova et al., 2020) by improving student teachers’ “noticing” and “knowledge-based
reasoning” (Stürmer et al., 2013; van Es & Sherin, 2008). Noticing is understood as teachers’ ability to focus their attention on
relevant classroom events (van Es & Sherin, 2002). Knowledge-based reasoning is defined as teachers’ ability to apply their professional knowledge in order to interpret these classroom events (Borko, 2004; van Es & Sherin, 2002). In teacher education programs,
reflection is implemented in different ways, with student teachers reflecting on classroom videos of other teachers or on video recordings of their own teaching (Kleinknecht & Schneider, 2013). Technological advances have made it possible to design VR classrooms that student teachers experience as realistic and authentic classroom settings (Huang et al., 2021; Wiepke et al., 2019), allowing
them to practice and subsequently reflect on video recordings of their own teaching. Beyond the practical advantages for student
teacher education (e.g., the approval of students, parents, educators, and administrators is not required), an important benefit of VR
classrooms is the standardized classroom setting in which all student teachers experience similar classroom events for which they can
be prepared in advance. This enables teacher educators to more easily provide relevant professional knowledge tailored to managing
critical classroom events in the VR classroom; when reflecting on video recordings of their own teaching in the VR setting, student
teachers may thus be more able to apply relevant professional knowledge.
Video-based reflection generally involves written or oral reflections after viewing classroom videos; it is recommended that these
reflections follow a three-step process (e.g., Prilop et al., 2021). Firstly, student teachers are instructed to describe the relevant
classroom events; secondly, they are asked to evaluate and interpret these events; and, thirdly, they are required to identify alternative
classroom behaviors (e.g., Prilop et al., 2021). When describing relevant classroom events, student teachers may focus either on their
own actions, students’ actions, or on the learning environment as a whole (Kleinknecht & Schneider, 2013; Lohse-Bossenz et al., 2019).
Previous research has been resoundingly positive on the benefits of video-based reflection for student teachers’ professional vision (e.
g., Stürmer et al., 2013; Weber et al., 2020). Very few studies shed light on the affective experiences associated with video-based
reflection (Kleinknecht, 2021). Although Kleinknecht and Schneider (2013) suggested that reflecting on other teachers’ videos can
induce more negative affect than reflecting on one’s own teaching videos, several studies have shown that reflecting on one’s own
videos elicits intense emotional involvement (Borko et al., 2008; Seidel et al., 2011; Zhang et al., 2011). What is unclear, however, is to
what extent student teachers’ attentional focus (i.e., focusing on one’s own thoughts, actions, or emotions) in the video-based reflection of
their own videos relates to their experiences of negative affect or stress.
1.2. “I-talk” as a linguistic marker of self-focused attention and negative affect
In the fields of experimental psychology and clinical psychology, there is increasing empirical evidence indicating that there is a
relationship between self-focused attention and greater negative affect (e.g., Mor & Winquist, 2002; Nook et al., 2017). For instance,
studies using correlational designs showed that self-report measures of self-focus—such as the Public and Private Self-Consciousness
Scale (Fenigstein et al., 1975), but also sentence completion tasks (Exner, 1973; Wegner & Giuliano, 1980)—are associated with
self-report measures of state or trait negative affect in non-clinical and clinical samples (for an overview, see the meta-analysis by Mor
& Winquist, 2002). Other studies showed that self-referential language (so-called “I-talk”)—as a linguistic marker of an individual’s
self-focus—is associated with more intense negative emotionality in non-clinical samples (Kern et al., 2014; Mehl et al., 2006; Yarkoni,
2010; Yee et al., 2011; see also meta-analysis by Edwards & Holtzman, 2017) and is a marker of depression (Dunnack & Park, 2009;
Rude et al., 2004; Zimmermann et al., 2016). An extensive multi-lab multi-language study with data from more than 4700 participants
recently confirmed that the use of first-person singular pronouns is linked to more intense negative emotionality and depression
(Tackman et al., 2019). The association between I-talk and depression has been explained to some extent by negative emotionality
(Tackman et al., 2019). This indicates that “I-talk” might reflect a broader dispositional tendency towards feelings of distress, but “[t]
his possibility is a topic of ongoing research” (Berry-Blunt et al., 2021, p. 5).
Although frequent I-talk is seen as maladaptive, it may be a way of processing negative affect (e.g., after receiving deprecatory
information about oneself), as a recent literature review concluded (Berry-Blunt et al., 2021). This would suggest that greater negative
affect provokes I-talk. Experimental research did not find any evidence that inducing negative affect by showing participants negative
pictures led to more I-talk (e.g., Bernard, Baddeley, Rodriguez, & Burke, 2016). In line with the suggestion by Bernard, Baddeley, Rodriguez, and Burke (2016), it may be the case that negative affect only leads to more I-talk when elicited by self-deprecating information. This
proposition is consistent with research on the negative affective experiences of teasing and ostracism, which do lead to more frequent
I-talk (Klauke et al., 2020; Kowalski, 2000).
The reverse may also be true, however. Taking a more distanced perspective—as indicated by less I-talk—may over time change an
individual’s dispositional negative emotionality (Berry-Blunt et al., 2021). Distancing, i.e., shifting one’s perspective to be more
“distant” from or less immersed in a negative event, is an adaptive emotion-regulation strategy that is characterized by a lower level of
self-focused attention and can help reduce negative affect (Kross & Ayduk, 2008). The suggestion that repeated distancing may reduce
an individual’s tendency to experience negative affect (Berry-Blunt et al., 2021) would explain why I-talk predicts future depressive
symptoms in patients (Dunnack & Park, 2009; Zimmermann et al., 2016). Even in the short term, less I-talk may lead to less negative
affect: Experimental research confirmed that distancing—i.e., using no I-talk when talking about one’s own emotions while preparing a
stressful speech—can lessen negative affect after having given the speech (Kross et al., 2014, Study 3). Thus, the causal direction of the
link between I-talk and negative emotionality is less clear, but there may be a bidirectional relationship.
1.3. Gender differences in the association between self-focus and negative affect
Based on meta-analytical findings that women exhibit a greater tendency to ruminatively self-focus than men when experiencing
depression (Johnson & Whisman, 2013), it has been suggested that the positive association between negative emotionality and the use
of first-person singular pronouns is larger for women than it is for men (e.g., Tackman et al., 2019). When examining these gender
differences, Tackman et al. (2019) underlined that the use of first-person singular pronouns seems to be driven by low-arousal negative
distress in women. In contrast, it appears that men’s use of first-person singular pronouns is driven by high-arousal negative distress
(Tackman et al., 2019). Thus, the use of first-person singular pronouns may be an indicator of different affective experiences in women
and men (Fast & Funder, 2010) and, thus, gender should be taken into account when studying the relationship between self-focus and
negative affect.
1.4. Distinguishing different forms of self-focus in written reflections
Different strands of research differ in their operationalization of self-focus. In research on reflection in teacher training, manual
ratings indicate the extent to which student teachers focus on their own actions and thoughts, on students’ actions, or on the learning
environment (e.g., Kleinknecht & Schneider, 2013). In experimental psychology, self-focus is often operationalized via the use of
first-person singular pronouns (Berry-Blunt et al., 2021; Mor & Winquist, 2002). This strand of research has argued that individuals can
take a more or less immersed or distanced perspective on their own actions and thoughts by using more or fewer first-person singular
pronouns (Kross & Ayduk, 2008; Kross et al., 2014). When reflecting on their own teaching, some student teachers may distance
themselves from the experience and therefore rarely use first-person singular pronouns, while others may immerse themselves in the
experience and use first-person singular pronouns more frequently. As such, both indicators of self-focus (qualitative ratings vs.
first-person singular pronouns) should provide incremental information about student teachers’ self-focus in their written reflections.
For student teachers taking a more immersive perspective on their own teaching, objective first-person singular pronouns (i.e.,
“me,” “myself”) may reflect a more dysfunctional form of self-focus than subjective first-person singular pronouns (i.e., “I”) (Zimmermann et al., 2016). While subjective pronouns reflect an “active or self-as-actor form of self-focus,” objective pronouns reflect a
“passive or self-as-target form of self-focus” (James, 1890; Tackman et al., 2019, p. 819) that may indicate an even more detrimental
style of processing self-relevant information (Zimmermann et al., 2016; see also more and less dysfunctional questions in Ehring,
2020). This poses the question of whether the relationship between the use of first-person pronouns and negative affect is driven
mainly by objective first-person pronouns (Zimmermann et al., 2016). As such, “it is important to evaluate whether and how the
association between depression and I-talk varies as a function of first-person singular pronoun type” (Tackman et al., 2019, p. 819).
Previous evidence on whether subjective and objective first-person pronouns differentially relate to negative affect is mixed (there is
support in the study by Zimmermann et al., 2016; but no or inconsistent differences in the studies by Dunnack & Park, 2009; Tackman
et al., 2019). But the distinction between subjective and objective first-person singular pronouns appears to be essential in our study, in
which the relationship between the use of first-person pronouns and negative affect is examined for the first time in the context of
teacher training.
1.5. Present study
Negative emotionality may be critical for student teachers when they reflect on their own teaching, yet research has rarely
concentrated on the role of negative emotionality for reflective processes (Kleinknecht, 2021). Meanwhile, research in the field of
experimental psychology indicates that negative affect may provoke self-focused attention as indicated by the use of first-person
singular pronouns (Berry-Blunt et al., 2021). Moreover, self-focused attention may also increase the tendency towards experiencing
negative affect (Berry-Blunt et al., 2021). What we do not know is whether this applies to the context of teacher education where
student teachers reflect on their own teaching. Studying this link between self-focus and negative affect in the context of student
teachers’ written reflections can provide valuable cues for diagnostic tools, automated feedback systems, and the improvement of
student teachers’ professional self-regulation. We used a VR classroom setting, and thus a highly standardized teaching situation, to
ensure that student teachers were reflecting on similar classroom events as we examined the following research questions:
(1) Do student teachers who exhibit a greater focus on their own actions (instead of students’ actions or the classroom environment)
use more subjective and objective first-person singular pronouns in written reflections on their teaching?
Here, the assumption could be that those student teachers who use subjective and objective first-person singular pronouns more
frequently also tend to focus on themselves rather than on the students in class or on the classroom environment. However, there is no
previous research combining manual ratings of student teachers’ foci (whether on their own actions vs. student actions vs. the learning
environment) with student teachers’ subjective and objective use of first-person singular pronouns when reflecting on their own
teaching. Thus, it is unclear to what extent manual ratings of student teachers’ foci correspond to the subjective and objective use of
first-person pronouns in written reflections on their teaching. We seek to address this research gap as an exploratory question.
(2) Do student teachers who experience more self-reported stress and physiological stress (as measured by heartrate response)
while teaching in a VR classroom use more subjective and objective first-person singular pronouns in written reflections of their
teaching?
Building on empirical evidence showing that negative affect may provoke self-focused attention, we analyze whether student
teachers who experience higher levels of negative affect when teaching in the VR classroom more frequently use subjective and
objective first-person singular pronouns in their written reflections. The class size in the VR classroom, i.e., the number of student
avatars, was manipulated to evoke higher levels (large-class-size condition) and lower levels of negative affect (small-class-size
condition) (Huang et al., 2022). We hypothesize that student teachers who experience more negative affect—operationalized via
self-reported and physiological stress—when teaching in the VR classroom for the first time will use more subjective and objective
first-person singular pronouns in their written reflections.
(3) Does student teachers’ use of subjective and objective first-person singular pronouns in written reflections of their teaching
predict their increase in stress in a subsequent VR lesson?
We postulate that student teachers who use more subjective and objective first-person singular pronouns in their written reflections
will experience a greater increase in self-reported and physiological stress in the second VR classroom (as compared to the stress levels
experienced when teaching in the VR classroom for the first time).
2. Material and methods
2.1. Sample and procedure
Participants were N = 65 student teachers enrolled at the University of (anonymized for review) in Germany. Four of the
participating students did not hand in their written reflection and two additional students did not participate in the second VR practice
session, limiting our analyses to n = 59 students. Student teachers were on average 24 years old (SD = 4.57) and 49% identified as
female, 51% as male, none as diverse. Most students were third-year bachelor students (58%; second-year: 27%; fourth-year: 14%).
These student teachers participated in a weekly seminar on classroom management. The seminar included two 10-min practice sessions in a VR classroom that took place two weeks apart. Both VR practice sessions followed a standardized procedure. Participants
were first given a brief standardized audio introduction on how to interact with the VR environment. Participants were then given a brief lecture on an a priori determined topic to deliver to the avatar students in the VR classroom. During the first VR teaching
experience, participants taught about the US electoral system and the 2020 US election. For the second VR teaching experience,
participants taught about sustainability. All the instructional materials that participants needed to accomplish the teaching task were
prepared and provided by the course instructor one week before the teaching exercise. During their VR teaching session, participants
were exposed to various on-task and off-task behaviors from the avatar students, such as asking topic-related questions, chatting, or
throwing paper balls. All avatar student actions were prescribed and therefore the same for all participants in both the class with 10
student avatars and the class with 30 student avatars. Immediately after their teaching experience in the VR classroom, the student
teachers reported on their stress levels in an online questionnaire. In addition, the student teachers handed in a written reflection on
their VR teaching session in the week following the first VR practice session.2 Prior to writing their reflections, the student teachers
received guidance on the three-step reflection process. In this process, they were instructed to describe three relevant classroom
situations (Step 1), evaluate and interpret these situations based on their professional knowledge (Step 2), and outline alternatives for
classroom situations that they evaluated negatively (Step 3). Before writing their reflections, student teachers were given time to
repeatedly watch the video of the VR classroom situation in which they had taught.
2 Students also reflected on the second VR classroom three months afterwards based on a video recording. Given the time span between this VR classroom and the written reflection, we didn't include these written reflections in our analyses.
2.2. Design of the VR classroom
The VR classroom was designed to resemble an upper secondary school classroom in Germany (e.g., Wiepke et al., 2021). It was set up with five rows and three columns of school desks and chairs with avatar students (see Appendix). The avatar students' names were displayed on name tags placed on their desks, and the avatars had a wide range of physical characteristics, such as skin tone, hairstyle, and clothing.
Student teachers were randomly assigned to teach a class of either 10 or 30 student avatars when teaching the VR classroom for the
first time and again when teaching it for the second time. The avatar students engaged in a range of behaviors that included both on-task and off-task actions. These actions ranged from constructive activities, such as writing in a notebook, to less productive actions, such as throwing a paper ball. The avatar students maintained a natural seated posture and a neutral demeanor, occasionally
redirecting their attention by shifting their gaze or adjusting their body orientation in response to the participants’ movements. The
selection of off-task behaviors was based on a compilation of common disruptive behaviors documented in the academic literature
(Borko, 2016; Wolff et al., 2016). All parameters governing the avatar students’ behaviors, including initiation time, duration, spatial
location, and behavior type, were carefully scripted to maintain uniformity across experimental conditions. This ensured that the
behaviors enacted by the avatar students remained consistent between the scenarios with 10 avatar students and those with 30 avatar
students in the classroom.
We used the HTC VIVE headset, which has a resolution of 1080 × 1200 pixels per eye, a 108° field of view, and a refresh rate of 90 Hz. The headset was connected to an Alienware laptop with a 2.2-GHz Intel Core i7-8750H processor, 16 GB of RAM, and an NVIDIA GeForce RTX 2060 graphics card with 6 GB of VRAM, on which the VR classroom software was run. Essentially,
participants could move around in reality while experiencing multisensory feedback in the VR classroom. Previous studies have
confirmed that the technical setup in our VR classroom created an immersive soundscape which student teachers experienced as
realistic and authentic (Wiepke et al., 2019, 2021).
Student teachers were equally distributed across the four conditions (small class size VR1, small class size VR2: 21%; small class size
VR1, large class size VR2: 23%; large class size VR1, small class size VR2: 25%; large class size VR1, large class size VR2: 28%).3 Thus,
some teachers taught under the same conditions twice, while other teachers taught one VR session in the small class size condition and
the other VR session in the large class size condition. Student teachers teaching in the VR classroom were instructed to teach their
lesson as they would in a real classroom, i.e., walking around the room and using similar observational and nonverbal behavior when interacting with the avatar students as they would with non-virtual students.
3 Percentages do not total 100 because of rounding.
2.3. Measures
2.3.1. Self-reported stress in the VR classroom
We measured the stress that student teachers experienced during the VR scenario using two items (“How tense did you feel in the
VR classroom?” and “How did you feel emotionally during the VR classroom?”; e.g., Delaney & Brodie, 2000). Items were answered on
a 9-point Likert-type scale ranging from 1 (calm, relaxed, composed) to 9 (tense). The internal consistency was good (αT1 = 0.83; αT2 =
0.85).
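For illustration only, the reported internal consistencies could be computed along the following lines; the data frame name `dat`, the item column names, and the use of the psych package are our assumptions, not taken from the authors' scripts.

```r
# Minimal sketch: Cronbach's alpha for the two self-reported stress items at T1.
# The data frame `dat` and the item column names are hypothetical.
library(psych)
alpha(dat[, c("stress_item1_t1", "stress_item2_t1")])
```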
2.3.2. Physiological indicator of stress in the VR classroom
We operationalized student teachers’ physiological stress reactions based on their heartrate (beats per minute, BPM). Student
teachers’ BPM was measured using an armband optical HR sensor (Polar OH1) at 0.3s intervals when teaching in the VR classroom.
Prior to starting the VR scenarios, each student teacher's baseline heartrate was measured at 0.3s intervals while the student teachers were asked to sit quietly and stay still. These baseline measures were used to control for individual differences in cardiovascular activity. We then aggregated the heartrate measurements from the baseline phase and the heartrate measurements taken during VR teaching. The difference between a student teacher's heartrate during teaching and their baseline heartrate was used as a physiological indicator of the student teacher's stress response.
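A minimal R sketch of this step is given below, assuming per-participant recordings in data frames `baseline_bpm` and `teaching_bpm` with columns `id` and `bpm`, and assuming aggregation by the mean; the exact aggregation used by the authors is not specified above.

```r
# Sketch: physiological stress indicator = mean BPM during VR teaching minus
# mean baseline BPM. Data frame and column names are hypothetical.
library(dplyr)

physiological_stress <- teaching_bpm %>%
  group_by(id) %>%
  summarise(bpm_teaching = mean(bpm, na.rm = TRUE)) %>%
  left_join(
    baseline_bpm %>%
      group_by(id) %>%
      summarise(bpm_baseline = mean(bpm, na.rm = TRUE)),
    by = "id"
  ) %>%
  mutate(stress = bpm_teaching - bpm_baseline)
```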
2.3.3. Self-referential language
We used the R package stringr (Wickham, 2019) to estimate the frequency of subjective and objective first-person singular pronouns. For each written reflection, we computed the percentage of subjective first-person singular pronouns (German: "ich"; English: "I") and the percentage of objective first-person singular pronouns (German: "mich," "mir"; English: "me," "myself").4
4 The objective first-person singular pronoun "meiner" is rarely used and was not considered here—following the suggestion by Tackman et al. (2019)—because it overlaps with the more frequently used possessive pronoun "meiner".
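A rough sketch of this computation with stringr is shown below. The denominator (total word count), the helper function name, and the example sentence are our assumptions; only the package and the target pronouns are taken from the text above.

```r
# Sketch: relative frequency (in %) of subjective ("ich") and objective
# ("mich", "mir") first-person singular pronouns in one reflection text.
library(stringr)

pronoun_rates <- function(text) {
  text_lower <- str_to_lower(text)
  n_words    <- str_count(text_lower, boundary("word"))
  n_subj     <- str_count(text_lower, "\\bich\\b")
  n_obj      <- str_count(text_lower, "\\b(mich|mir)\\b")
  c(subjective = 100 * n_subj / n_words,
    objective  = 100 * n_obj  / n_words)
}

pronoun_rates("Ich dachte, die Störung hat mich aus dem Konzept gebracht.")
```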
2.3.4. Manual ratings of focus and depth in written reflections
Two independent raters manually rated the focus (as either student teachers’ own actions, student avatars’ actions, learning
environment, or no focus) and depth (description, evaluating/explaining, or reflecting on alternatives) in student teachers’ written
reflections using the software MAXQDA and following the procedure of Lohse-Bossenz et al. (2019; Kleinknecht & Gröschner, 2016).
Interrater agreement was good (κ = 0.75). For our analyses, we used the coverage percentage, i.e., the number of characters in the
coded segment in relation to the total number of characters in the text.
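Spelled out, the coverage measure amounts to a simple character ratio; the following sketch uses hypothetical inputs rather than the authors' MAXQDA export.

```r
# Sketch: coverage (in %) of one focus category = characters in the segments
# coded with that focus divided by the total characters of the reflection.
coverage <- function(coded_segments, full_text) {
  100 * sum(nchar(coded_segments)) / nchar(full_text)
}

coverage("Ich habe die Störung zu spät bemerkt.",
         "Ich habe die Störung zu spät bemerkt. Die Klasse war danach unruhig.")
```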
2.4. Statistical analyses
To test the hypotheses, we conducted regression analyses using the R package MplusAutomation (Hallquist & Wiley, 2018) and
MLR as the estimation method. A manipulation check on our data was conducted by Huang et al. (2022) who probed whether student
teachers experienced more self-reported and physiological stress when teaching a VR classroom in the large-size condition than in the
small-size condition. We initially examined whether student teachers who used more subjective and objective first-person singular
pronouns focused more frequently on themselves rather than on the avatar students in class or on the classroom environment (RQ1). To
do so, we distinguished between student teachers who used subjective first-person singular pronouns with low frequency (i.e., ≤0.5 SD
below M), average frequency (≥0.5 SD below M and ≤0.5 SD above M), and high frequency (≥0.5 SD above M). A multivariate analysis
of variance was conducted with the subjective use of first-person singular pronouns (low, average, or high) as the between-person
factor and the relative frequency of the student teachers’ reflection focus (the student teacher, student avatars, or learning environment) as the dependent variable. The analysis was then repeated for the objective use of first-person singular pronouns. To examine our
second research question, we then regressed subjective and objective first-person singular pronoun use in the written reflections of the
first VR classroom on self-reported stress in these VR sessions (RQ2). We controlled for gender.5 Within this model, we allowed the use
of subjective and objective first-person singular pronouns to correlate. We specified an analogous model with physiological stress as
the predictor. In a second step, we regressed self-reported stress during the second VR classroom teaching session on the subjective and
objective use of first-person singular pronouns while controlling for self-reported stress during the first VR classroom teaching session
(RQ3). Gender and class size in the second VR classroom teaching session were used as covariates. Thus, regression coefficients of
pronoun use and gender indicate whether these variables explain changes in self-reported stress in the second, as compared to the first,
VR classroom teaching session. Analyses were conducted separately for the use of subjective and objective first-person singular
pronouns to probe whether subjective and objective first-person pronouns differentially relate to negative affect (instead of examining
incremental effects). Analogous models were computed for physiological stress. Due to technical difficulties when transferring the
heartrate data onto the storage device, heartrate while teaching in the VR classroom was only available for 40 students. There were
only 29 students for whom heartrate responses were present in both VR classrooms. We examined whether values were missing
completely at random using the MCAR-Test (Little, 1988) implemented in the R package naniar (Tierney et al., 2021). The test yielded
non-significant results (χ2 = 17.5, df = 25, p = 0.864) indicating that the values were missing completely at random. We applied the
full-information maximum-likelihood approach (FIML; Enders, 2001) to obtain appropriate estimates and standard errors.
5 We did not look for interaction effects between the use of first-person singular pronouns and gender as our sample was small.
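The sketch below illustrates three of the steps described in this section: the grouping into low, average, and high pronoun use, Little's MCAR test via naniar, and a regression analogous to Model 3a specified through MplusAutomation with the MLR estimator. The data frame `dat` and all variable names are assumptions for illustration; running the model additionally requires a local Mplus installation.

```r
# Sketch only; `dat` and the (Mplus-compatible, <= 8 character) variable names
# strs_vr1, strs_vr2, subjpron, gender, csize2 are hypothetical.
library(naniar)
library(MplusAutomation)

# Little's (1988) test of missing completely at random on the analysis variables
mcar_test(dat[, c("strs_vr1", "strs_vr2", "subjpron", "gender", "csize2")])

# Grouping of subjective pronoun use relative to +/- 0.5 SD around the mean
m <- mean(dat$subjpron, na.rm = TRUE)
s <- sd(dat$subjpron, na.rm = TRUE)
dat$subj_group <- cut(dat$subjpron,
                      breaks = c(-Inf, m - 0.5 * s, m + 0.5 * s, Inf),
                      labels = c("low", "average", "high"))

# Regression of self-reported stress in VR2 on subjective pronoun use,
# controlling for stress in VR1, gender, and class size (MLR estimation;
# the paper reports handling missing data with FIML).
model_3a <- mplusObject(
  TITLE    = "Stress in VR2 regressed on subjective pronoun use;",
  ANALYSIS = "ESTIMATOR = MLR;",
  MODEL    = "strs_vr2 ON strs_vr1 subjpron gender csize2;",
  OUTPUT   = "STANDARDIZED;",
  usevariables = c("strs_vr2", "strs_vr1", "subjpron", "gender", "csize2"),
  rdata    = dat
)
fit_3a <- mplusModeler(model_3a, modelout = "model_3a.inp", run = 1L)
```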
3. Results
A manipulation check on our data (Huang et al., 2022) showed that student teachers experienced more self-reported and physiological stress when teaching a VR class in the large-size condition than in the small-size condition. Descriptive results (Table 1) and a
dependent t-test showed that self-reported stress was greater in the first VR session than in the second VR session, t(54) = 4.89, p <
0.001. In contrast, physiological stress did not differ statistically significantly between the two VR sessions, t(28) = 0.68, p = 0.503.
3.1. Subjective and objective first-person singular pronouns and manual ratings of focus in written reflections
Pertaining to our first research question, we checked whether student teachers with a more frequent use of subjective and objective
first-person singular pronouns focused more often on themselves rather than on the avatar students in class or on the classroom
environment (Fig. 1a and b). A multivariate analysis of variance with the subjective use of first-person singular pronouns (low,
average, or high) as a between-person factor and relative frequency of student teachers’ focus of reflection on their own actions (as
opposed to focus on student avatars or the learning environment) as a dependent variable showed no statistically significant effect, F(6, 108) = 0.59, p = 0.74. Similarly, we found no statistically significant difference between students with low, average, or high frequency
of the objective use of first-person singular pronouns on the relative frequency of student teachers’ reflection being focused on their
own actions, F(6, 108) = 1.18, p = 0.32. Thus, high subjective or objective first-person singular pronoun use was not reflected in a
greater focus on student teachers’ own actions in written reflections (as indicated by manual ratings).
3.2. Do student teachers who experience more stress in the VR session use more subjective and objective first-person singular pronouns in
their written reflections?
To answer our second research question regarding the relationship between student teachers’ stress in the VR session and the use of
subjective and objective first-person singular pronouns, we conducted cross-sectional regression analyses. Our results showed that self-reported stress experienced during the first VR teaching session was not statistically significantly associated with the use of subjective
and objective first-person singular pronouns (Table 2). Gender showed a statistically significant association with subjective and
objective pronoun use, indicating that student teachers who identified as male used subjective and objective first-person singular
pronouns more frequently than student teachers who identified as female. In contrast, physiological stress experienced during the first
VR teaching session was statistically and positively associated with both the use of subjective and the use of objective first-person
singular pronouns. Thus, student teachers who experienced greater physiological stress when teaching in the VR classroom used
more subjective or objective first-person singular pronouns in their written reflections. Beyond physiological stress, associations between gender and the use of subjective or objective first-person pronouns were not statistically significant.
3.3. Do student teachers who use more subjective and objective first-person singular pronouns in their written reflections experience a
greater increase in stress in the second VR session?
Concerning our third research question, we examined whether student teachers who used more subjective and objective first-person singular pronouns in their written reflection experienced a greater increase in stress in the second VR session. When regressing the self-reported stress that participants experienced during the second VR teaching session, the use of subjective first-person singular pronouns was not a statistically significant predictor when controlling for self-reported stress in the first VR teaching session, gender, and class size during the second VR teaching session (Table 3, Model 3a). The use of objective first-person singular
pronouns emerged as a statistically significant predictor of self-reported stress during the second VR teaching session (Table 3, Model
3b). This was not in line with our cross-sectional findings and we therefore conducted additional analyses, which revealed that the
association between the use of objective first-person singular pronouns and self-reported stress was explained by the level of physiological stress in the first VR teaching session (Table A2 in Appendix). Concerning the covariates, gender and class size were not
statistically significant predictors of self-reported stress in both models (Models 3a-b, Table 3). Associations between self-reported
stress experienced in the first and in the second VR teaching session were statistically significant, indicating that student teachers
who experienced more self-reported stress in the first VR teaching session reported more self-reported stress in the second VR teaching
session.
Associations between physiological stress during the second VR teaching session and the use of subjective first-person singular
pronouns were statistically significant when controlling for physiological stress experienced during the first VR teaching session and
class size in the second VR session (Table 3, Model 4a). Thus, student teachers who used more subjective first-person singular pronouns
in their written reflections of the first VR teaching session experienced a stronger increase in physiological stress during the second VR
teaching session. We found no association between physiological stress during the second VR teaching session and the use of objective
first-person singular pronouns (Table 3, Model 4b). Regarding the covariates, gender was not a statistically significant predictor of
physiological stress in either model, while a greater class size for the second VR classroom was associated with greater physiological
stress (Models 4a-b, Table 3). Student teachers who experienced greater physiological stress in the first VR teaching session exhibited
more intense physiological stress in the second VR teaching session.
4. Discussion
Negative affect can play a crucial role in student teachers' reflection, but it has barely been studied (Kleinknecht, 2021).
Table 1
Descriptive statistics.
N M SD Min Max
Psychological stress VR1 58 5.22 1.70 2.00 9.00
Psychological stress VR2 55 3.97 1.78 1.50 9.00
Physiological stress VR1 40 60.36 22.25 21.56 108.38
Physiological stress VR2 46 56.88 24.40 7.86 97.90
Subjective pronouns VR1 59 3.61 2.16 0.00 9.18
Objective pronouns VR1 59 1.13 1.12 0.00 5.10
Note. Physiological stress = difference between heartrate (BPM) while teaching and baseline heartrate (BPM). Subjective pronouns = relative frequency of subjective first-person singular pronouns in written reflection. Objective pronouns = relative frequency of objective first-person singular
pronouns in written reflection. Gender: 0 = female.
The present study applied psychological research on the relationship between negative affect and self-focus—which would potentially be
valuable for diagnostic tools, automated feedback systems, and the improvement of emotion regulation—to the context of student
teachers’ reflections. Initially, we looked at whether student teachers who used more subjective and objective first-person singular
pronouns focused more frequently on themselves rather than on the avatar students in the VR class or on the classroom environment as
a whole. In addition, we examined whether student teachers’ negative affect while teaching in a VR classroom—in terms of higher
self-reported stress and heartrate—affected their self-focus in written reflections operationalized via the subjective and objective use of
first-person singular pronouns. We also explored whether the use of subjective and objective first-person singular pronouns relates to
stress in a subsequent teaching session in a VR classroom.
Initially, we found that there were no differences in manual ratings of focus between student teachers with a low, average, or high
subjective or objective use of first-person singular pronouns. This is in line with findings illustrating that individuals can regard their
own actions and thoughts from a more or less immersed or distanced standpoint, which is reflected in their more or less frequent use of
subjective and objective first-person singular pronouns (Kross & Ayduk, 2008; Kross et al., 2014). While one student teacher, for
instance, described classroom disruptions during his lesson from a more immersed perspective, pointing out how it made him feel
(“Some classroom disturbances made me upset […]. The interruptions kept me from getting back to where I started and from finishing
properly.”), another student started his reflection by taking a more distanced perspective, describing his own actions from a
third-person perspective (“There is a recurring loss of the common thread due to class disruptions and the teacher trying to address
every minor disruption.”). Thus, the use of subjective and objective first-person singular pronouns provides incremental information to
manual ratings of focus in written reflections. Future research examining student teachers’ focus when reflecting on their own teaching
could therefore benefit from incorporating different measures of self-focus.
Fig. 1. (a) Focus in written reflections from content analysis, by use of subjective first-person singular pronouns. Note. Low use of subjective first-person singular pronouns: ≤0.5 SD below M; average use: ≥0.5 SD below M and ≤0.5 SD above M; high use: ≥0.5 SD above M. (b) Focus in written reflections from content analysis, by use of objective first-person singular pronouns.
Table 2
Predicting first-person singular pronoun use in written reflection by self-reported and physiological stress in first VR teaching session.
Subjective pronouns Objective pronouns
β p 95% CI β p 95% CI
Model 1
Intercept 0.74 0.110 [-0.16, 1.65] 0.17 0.700 [-0.68, 1.02]
Self-reported stress 0.21 0.100 [-0.04, 0.45] 0.20 0.160 [-0.08, 0.47]
Gender 0.28* 0.020 [ 0.05, 0.51] 0.23* 0.040 [ 0.01, 0.45]
R2 0.12 0.09
Model 2
Intercept 0.47 0.240 [-0.31, 1.24] − 0.06 0.850 [-0.66, 0.54]
Physiological stress 0.35*** 0.000 [ 0.13, 0.57] 0.32* 0.010 [ 0.09, 0.55]
Gender 0.24 0.030 [0.02, 0.47] 0.20 0.110 [-0.04, 0.44]
R2 0.18 0.14
Note. Coefficients are standardized. Gender: 0 = female.
Table 3
Predicting self-reported and physiological stress in subsequent VR teaching session by use of first-person singular pronouns in written reflection of
previous VR teaching session.
β p 95% CI β p 95% CI
Self-reported stress VR2 Model 3a Model 3b
Intercept 1.38* 0.020 [ 0.24, 2.52] 1.20* 0.040 [ 0.05, 2.34]
Self-rep. stress VR1 0.37*** <0.001 [ 0.12, 0.61] 0.33* 0.010 [ 0.10, 0.57]
Class size in VR2 − 0.16 0.190 [-0.39, 0.08] − 0.10 0.410 [-0.35, 0.14]
Subj. pronouns VR1 0.06 0.650 [-0.22, 0.35]
Obj. pronouns VR1 0.29* 0.030 [ 0.02, 0.55]
Gender 0.07 0.590 [-0.19, 0.34] 0.04 0.770 [-0.20, 0.27]
R2 0.18 0.23
Physiological stress VR2 Model 4a Model 4b
Intercept − 1.29*** <0.001 [-1.89, − 0.69] − 1.17*** <0.001 [-1.74, − 0.60]
Phys. stress VR1 0.26*** <0.001 [ 0.09, 0.42] 0.25*** <0.001 [ 0.09, 0.41]
Class size in VR2 0.83*** <0.001 [ 0.75, 0.92] 0.84*** <0.001 [ 0.75, 0.93]
Subj. pronouns VR1 0.18* 0.040 [0.01, 0.35]
Obj. pronouns VR1 0.17 0.060 [-0.01, 0.35]
Gender 0.05 0.560 [-0.12, 0.22] 0.05 0.510 [-0.11, 0.21]
R2 0.72 0.71
Note. Coefficients are standardized. Gender: 0 = female. Class size: 0 = small.
Our results also indicate that student teachers who experienced more physiological stress when teaching in the VR classroom used
more subjective and objective first-person singular pronouns in their written reflections. This finding is consistent with research
showing that negative affective experiences may provoke the use of subjective and objective first-person singular pronouns as a way to
process negative self-relevant information (Berry-Blunt et al., 2021; Klauke et al., 2020; Kowalski, 2000). However, given their
cross-sectional nature, our results could also indicate that student teachers who use less linguistic distancing—i.e., who are more
immersed into a situation, indicated by a greater use of subjective and objective first-person singular pronouns—experience higher
levels of negative affect, which is also in line with previous findings (Shahane et al., 2023). Our study is the first to confirm that this
association can be replicated in the context of student teachers’ reflections on their own teaching. Our results further amplify existing
research by revealing that this relationship generalizes to a physiological indicator of negative affect, namely to individuals’
heartrates.
In terms of gender differences, our results suggested that student teachers who identified as male used more subjective and
objective first-person singular pronouns than student teachers who identified as female. These gender differences disappeared when
controlling for physiological stress experienced during the VR teaching session. The finding that men used subjective and objective
first-person singular pronouns more frequently than women is not in line with related research showing a greater tendency in women to ruminatively self-focus when depressed, and thus when experiencing high levels of negative affect (Johnson & Whisman, 2013). It has,
however, been suggested that men’s use of subjective and objective first-person singular pronouns is an indicator of high-arousal
negative distress, while women’s use of first-person singular pronouns is driven by low-arousal negative distress (Tackman et al.,
2019). In our study, student teachers were reflecting on a teaching situation that had the potential to elicit high-arousal negative
distress. Indeed, we found that, when controlling for physiological stress experienced during the VR session, gender differences in the
use of subjective and objective first-person singular pronouns disappeared. These results further support the notion that the use of
first-person singular pronouns may be an indicator of different affective experiences in women and men (Fast & Funder, 2010).
In addition, we found that greater use of subjective first-person singular pronouns led to higher physiological stress in the subsequent VR session. Thus, we found some indication that student teachers who took a more distanced perspective experienced reduced
future stress (as suggested by Berry-Blunt et al., 2021; Zimmermann et al., 2016). This is in line with research showing that individuals
who spontaneously use more linguistic distancing when reflecting on negative and positive events report lower levels of stress in these
and in subsequent situations, and overall greater well-being (Shahane et al., 2023). Despite the fact that the use of subjective and
objective first-person singular pronouns may be a strategy that student teachers use to process negative teaching experiences, this
strategy—referred to as rumination—is considered maladaptive (e.g., Mor & Winquist, 2002). Rumination—i.e., the strategy of
regulating negative mood by repeatedly focusing one’s attention on one’s own negative experiences, and the causes and effects
(Nolen-Hoeksema, 1991)—has been associated with depression (Hong, 2007), inefficient problem-solving, and lower self-efficacy
(Lyubomirsky et al., 2003; Reindl et al., 2020). Teachers who ruminate more experience higher levels of stress in the classroom
and are more susceptible to burnout (Košir et al., 2015). We assessed naturally occurring differences in the use of subjective and
objective first-person singular pronouns and may therefore have underestimated the benefits of taking a more distanced perspective.
Future studies with greater sample sizes and power should aim to explore whether a similar effect might emerge for objective
first-person pronouns, for which we found a marginally significant p-value.
In contrast, our data did not support the association between self-reported negative affect and the use of subjective and objective
first-person singular pronouns. Berry-Blunt et al. (2021) proposed that some psychometric units, i.e., “facets, nuances, and items” (p. 8)
might capture I-talk better than others; our self-report measure of stress may not have been ideal in this respect. Moreover, the
self-report was assessed after the VR session, which potentially led to lower congruence between physiological and self-reported stress
responses than when self-report is assessed continuously during the stressful situation (Campbell & Ehlert, 2012).
4.1. Pedagogical implications
While reflection can improve student teachers’ professional vision, and is therefore seen as an important tool in teacher education
(e.g., Stürmer et al., 2013; Weber et al., 2020), less is known about its potential to assist adaptive emotion regulation strategies that
teachers need in order to be able to cope with challenging classroom events (Chang, 2009). Reappraising a challenging situation is seen
as an effective strategy (Gross, 2002) by which teachers change how they think about an event and thereby decrease its emotional
impact (Chang, 2009; Gross, 2022). When engaging in reappraisal, teachers may reduce their use of first-person singular pronouns,
indicating their greater psychological distance to a challenging situation (Nook et al., 2020). Our findings indicate that an increased
self-focus in student teachers’ written reflections—as indicated by a more frequent use of subjective first-person singular pronouns—is
associated with greater physiological stress. Automated feedback systems could build on this finding by identifying student teachers
who repeatedly experience elevated stress and are thus at risk for depression and burnout. This could complement feedback on the
quality of their written reflections (Wulff et al., 2022, 2023). We found some indication that taking a more distanced perspective—as
indicated by less frequent use of subjective first-person pronouns—reduces future stress (as suggested by Berry-Blunt et al., 2021;
Shahane et al., 2023; Zimmermann et al., 2016). Practicing taking a more distanced perspective on negative events is not only seen as
an adaptive emotion-regulation strategy (Kross & Ayduk, 2008), it may over time change a student teacher’s tendency to experience
stress (for a similar suggestion outside the context of teacher education, see Berry-Blunt et al., 2021). A structured practice of reappraisal can facilitate adaptive emotion regulation (Christou-Champi et al., 2015). Thus, training student teachers to use more reappraisal and put more distance between themselves and challenging classroom situations when reflecting on their teaching could be a
viable strategy to help them cope with stress. In addition, Ehring (2020) suggests that ruminative thinking, i.e., focusing one’s
attention on one’s own negative experiences, can be transformed into more adaptive information processing by focusing attention on
physical reactions and emotions in a specific situation and fostering self-compassion, which has been shown to be incompatible with
ruminative thinking (Watkins, 2016). Combining video-based reflection on one’s own teaching with reappraisal and
mindfulness-based strategies could be a promising way to foster student teachers’ professional vision, as well as their well-being and
stress-resistance.
4.2. Limitations and future research
The current study has some limitations. The sample size of the study was small. Our results should therefore be replicated using a
larger sample, which could more effectively detect small effects. By providing a highly standardized setting, the VR classroom increases the internal validity of our research findings, which may come at the cost of ecological validity. It has been argued that VR
creates a perceptual illusion and “the real power of VR […] [is that] even though you know it is an illusion, this does not change your
perception or your response to it” (Slater, 2018, p. 2). Some features of the VR environment, such as a realistic display of the environment, a smooth display of motion and view changes, and control of behaviors, are seen as essential to increase the likelihood of
optimal learning in VR (Dalgarno & Lee, 2010; Delamarre et al., 2021). Prior studies showed that student teachers perceived our VR
classroom as realistic and authentic (Wiepke et al., 2019, 2021). Student teachers trained in our VR classroom showed similar reflection
processes compared to students reflecting on real classroom videos and showed a substantial increase in reflection-related self-efficacy
over time (Richter et al., 2022). Nevertheless, more validation studies are needed to evaluate the transferability of the positive results
of the participation in a VR learning setting to authentic classrooms. Moreover, future studies should investigate whether our findings
are generalizable to non-virtual classroom environments and to in-service teachers. Studies should also incorporate further self-report
measures of negative affect to “identify […] the smaller psychometric units (e.g., trait facets, nuances, and items) that best capture
I-talk” (Berry-Blunt et al., 2021, p. 8). To better understand the temporal dynamics between negative affect and self-focused attention,
a longitudinal study with multiple measurements is necessary. Such a study design could help explain the extent to which
situation-specific and personal characteristics play a role in the interplay between the use of subjective and objective first-person
singular pronouns and negative affect. The extent to which student teachers can be trained to more professionally process negative
events could be explored by experimentally manipulating the ways in which student teachers describe and evaluate negative classroom
events—taking either a distanced or a self-immersed perspective—by prompting student teachers to use distanced vs. self-referential
language or by instructing them to reframe negative events vs. self-immerse in their emotions (see also Nook et al., 2017). Intervention
studies could be a promising approach to help identify how reflection could be implemented in a way that is beneficial for student
teachers’ professional vision and their emotion regulation, without overwhelming them.
5. Conclusions
Thanks to a standardized VR classroom environment, our study is the first to provide evidence that student teachers’ self-focus in
their written reflections is linked to the stress they experience while teaching. Our multimodal assessment of stress—based on student
teachers’ self-reports and heartrate responses—allowed for a differentiated approach to studying emotions in the VR learning environment. Not only can we show that the association between negative affect and self-focus—measured via I-talk—holds in student
teachers’ written reflections on their own teaching, but our study also adds to previous findings by showing that this link can be
generalized to individuals’ heartrates, representing a physiological indicator of negative affect. These results point to the potential that
reflecting on one’s own teaching may have for practicing adaptive emotion regulation strategies in teacher education programs.
Credit author statement
AW: Conceptualization, Formal analysis, Writing – original draft, Reviewing and Editing. ER: Conceptualization, Investigation, Data curation, Project administration, Writing – Reviewing and Editing. RL: Writing – Reviewing and Editing. YH: Investigation, Project administration.
Declaration of competing interest
None.
Data availability
The authors do not have permission to share data.
APPENDIX
Table A.1
Descriptive statistics for baseline heartrate and heartrate in VR teaching session
N M SD Min Max
Baseline BPM T1 42 96.92 30.88 44.41 173.62
Baseline BPM T2 50 102.37 30.03 40.34 175.88
BPM in VR session T1 57 161.14 20.44 125.02 196.45
BPM in VR session T2 55 160.65 20.00 115.57 196.59
Note. Measures used to compute physiological stress, i.e., difference in heartrate between baseline and VR session.
Table A.2
Predicting self-reported stress in subsequent VR teaching session by physiological stress and use of first-person singular pronouns in written reflection of previous VR teaching session
β p 95% CI
Self-reported stress VR2 Model 5a
Intercept 0.39 0.560 [-0.93, 1.71]
Self-reported stress VR1 0.31* 0.010 [ 0.06, 0.55]
Physiological stress VR1 0.29* 0.040 [ 0.01, 0.57]
Class size in VR2 − 0.06 0.640 [-0.31, 0.19]
Objective pronouns VR1 0.24 0.110 [-0.05, 0.54]
Gender 0.06 0.620 [-0.18, 0.30]
R2 0.28
Note. Coefficients are standardized. Gender: 0 = female. Class size: 0 = small.
Fig. A.1. VR Classroom from the perspective of student teachers.
Fig. A.2. Student teacher teaching in VR Classroom.
References
Bernard, J. D., Baddeley, J. L., Rodriguez, B. F., & Burke, P. A. (2016). Depression, language, and affect: An examination of the influence of baseline depression and
affect induction on language. Journal of Language and Social Psychology, 35(3), 317–326. https://doi.org/10.1177/0261927X15589186
Berry-Blunt, A. K., Holtzman, S. H., Donnellan, M. B., & Mehl, M. R. (2021). The story of “I” tracking: Psychological implications of self-referential language use. Social
and Personality Psychology Compass, 15(2), Article e12647. https://doi.org/10.1111/spc3.12647
Borko, H. (2004). Professional development and teacher learning: Mapping the terrain. Educational Researcher, 33(8), 3–15. https://doi.org/10.3102/0013189X033008003
Borko, H. (2016). Methodological contributions to video-based studies of classroom teaching and learning: A commentary. ZDM, 48(1), 213–218, 10/gfs2vk.
Borko, H., Jacobs, J., Eiteljorg, E., & Pittman, M. E. (2008). Video as a tool for fostering productive discussions in mathematics professional development. Teaching and
Teacher Education, 24(2), 417–436. https://doi.org/10.1016/j.tate.2006.11.012
Campbell, J., & Ehlert, U. (2012). Acute psychosocial stress: Does the emotional stress response correspond with physiological responses? Psychoneuroendocrinology,
37(8), 1111–1134. https://doi.org/10.1016/j.psyneuen.2011.12.010
Chang, M. L. (2009). An appraisal perspective of teacher burnout: Examining the emotional work of teachers. Educational Psychology Review, 21, 193–218. https://doi.
org/10.1007/s10648-009-9106-y
Chernikova, O., Heitzmann, N., Fink, M. C., Timothy, V., Seidel, T., Fischer, F., & DFG Research group COSIMA.. (2020). Facilitating diagnostic competences in higher
education - a meta-analysis in medical and teacher education. Educational Psychology Review, 32, 157–196. https://doi.org/10.1007/s10648-019-09492-2
Christou-Champi, S., Farrow, T. F., & Webb, T. L. (2015). Automatic control of negative emotions: Evidence that structured practice increases the efficiency of emotion
regulation. Cognition & Emotion, 29(2), 319–331. https://doi.org/10.1080/02699931.2014.901213
Dalgarno, B., & Lee, M. J. (2010). What are the learning affordances of 3-D virtual environments? British Journal of Educational Technology, 41(1), 10–32. https://doi.
org/10.14221/ajte.2016v41n1.8
Delamarre, A., Shernoff, E., Buche, C., Frazier, S., Gabbard, J., & Lisetti, C. (2021). The interactive virtual training for teachers (IVT-T) to practice classroom behavior
management. International Journal of Human-Computer Studies, 152, Article 102646. https://doi.org/10.1016/j.ijhcs.2021.102646
Delaney, J. P. A., & Brodie, D. A. (2000). Effects of short-term psychological stress on the time and frequency domains of heart-rate variability. Perceptual and Motor
Skills, 91(2), 515–524. https://doi.org/10.2466/pms.2000.91.2.515
Dunnack, E. S., & Park, C. L. (2009). The effect of an expressive writing intervention on pronouns: The surprising case of I. Journal of Loss & Trauma, 14(6), 436–446.
https://doi.org/10.1080/15325020902925084
Edwards, T., & Holtzman, N. S. (2017). A meta-analysis of correlations between depression and first person singular pronoun use. Journal of Research in Personality, 68, 63–68.
https://doi.org/10.1016/j.jrp.2017.02.005
Ehring, T. (2020). Die ruminationsfokussierte Kognitive Verhaltenstherapie [Rumination-focused cognitive-Behavioral Therapy]. Zeitschrift für Psychiatrie, Psychologie
und Psychotherapie, 68(3), 150–159. https://doi.org/10.1024/1661-4747/a000414
Enders, C. K. (2001). The impact of nonnormality on full information maximum-likelihood estimation for structural equation models with missing data. Psychological
Methods, 6, 352–370. https://doi.org/10.1037/1082-989X.6.4.352
Exner, J. E., Jr. (1973). The self focus sentence completion: A study of egocentricity. Journal of Personality Assessment, 37(5), 437–455. https://doi.org/10.1080/
00223891.1973.10119902
Fast, L. A., & Funder, D. C. (2010). Gender differences in the correlates of self-referent word use: Authority, entitlement, and depressive symptoms. Journal of
Personality, 78, 313–338. https://doi.org/10.1111/j.1467-6494.2009.00617.x
Fenigstein, A., Scheier, M. F., & Buss, A. H. (1975). Public and private self-consciousness: Assessment and theory. Journal of Consulting and Clinical Psychology, 43(4),
522–527. https://doi.org/10.1037/h0076760
Goddard, R., O'Brien, P., & Goddard, M. (2006). Work environment predictors of beginning teacher burnout. British Educational Research Journal, 32(6), 857–874. https://doi.org/10.1080/01411920600989511
Gold, B., & Windscheid, J. (2020). Observing 360-degree classroom videos – Effects of video type on presence, emotions, workload, classroom observations, and ratings of teaching quality. Computers & Education, 156, 103960. https://doi.org/10.1016/j.compedu.2020.103960
Gross, J. J. (2002). Emotion regulation: Affective, cognitive, and social consequences. Psychophysiology, 39(3), 281–291. https://doi.org/10.1017/S0048577201393198
Hallquist, M. N., & Wiley, J. F. (2018). MplusAutomation: An R package for facilitating large-scale latent variable analyses in Mplus. Structural Equation Modeling: A
Multidisciplinary Journal, 25(4), 621–638. https://doi.org/10.1080/10705511.2017.1402334
Hong, R. Y. (2007). Worry and rumination: Differential associations with anxious and depressive symptoms and coping behavior. Behavior Research and Therapy, 45
(2), 277–290. https://doi.org/10.1016/j.brat.2006.03.006
Huang, Y., Richter, E., Kleickmann, T., & Richter, D. (2022). Class size affects preservice teachers’ physiological and psychological stress reactions: An experiment in a
virtual reality classroom. Computers & Education, 184, 104503. https://doi.org/10.1016/j.compedu.2022.104503.
Huang, Y., Richter, E., Kleickmann, T., Wiepke, A., & Richter, D. (2021). Classroom complexity affects student teachers’ behavior in a VR classroom. Computers &
Education, 163, 104100. https://doi.org/10.1016/j.compedu.2020.104100.
Hultell, D., Melin, B., & Gustavsson, J. P. (2013). Getting personal with teacher burnout: A longitudinal study on the development of burnout using a person-based
approach. Teaching and Teacher Education, 32, 75–86. https://doi.org/10.1016/j.tate.2013.01.007
James, W. (1890). The principles of psychology. New York, NY: Holt & Co.
Johnson, D. P., & Whisman, M. A. (2013). Gender differences in rumination: A meta-analysis. Personality and Individual Differences, 55, 367–374. https://doi.org/
10.1016/j.paid.2013.03.019
Ke, F., & Xu, X. (2020). Virtual reality simulation-based learning of teaching with alternative perspectives taking. British Journal of Educational Technology, 51(6),
2544–2557. https://doi.org/10.1111/bjet.12936
Kern, M. L., Eichstaedt, J. C., Schwartz, H. A., Dziurzynski, L., Ungar, L. H., Stillwell, D. J., Kosinski, M., Ramones, S. M., & Seligman, M. E. P. (2014). The online social
self: An open vocabulary approach to personality. Assessment, 21(2), 158–169. https://doi.org/10.1177/1073191113514104
Klauke, F., Müller-Frommeyer, L. C., & Kauffeld, S. (2020). Writing about the silence: Identifying the language of ostracism. Journal of Language and Social Psychology,
39(5–6), 751–763. https://doi.org/10.1177/0261927X19884599
Kleinknecht, M. (2021). Emotionen von Lehrkräften in unterrichtsvideobasierten Fortbildungen [Teachers' emotions in video-based training]. In M. Gläser-Zikuda, F. Hofmann, & V. Frederking (Eds.), Emotionen im Unterricht: Psychologische, pädagogische und fachdidaktische Perspektiven [Emotions in the classroom: Psychological, pedagogical and didactical perspectives] (pp. 231–243). Stuttgart, Germany: Kohlhammer Verlag.
Kleinknecht, M., & Gröschner, A. (2016). Fostering preservice teachers' noticing with structured video feedback: Results of an online- and video-based intervention study. Teaching and Teacher Education, 59, 45–56. https://doi.org/10.1016/j.tate.2016.05.020
Kleinknecht, M., & Schneider, J. (2013). What do teachers think and feel when analyzing videos of themselves and other teachers teaching? Teaching and Teacher
Education, 33, 13–23. https://doi.org/10.1016/j.tate.2013.02.002
Košir, K., Tement, S., Licardo, M., & Habe, K. (2015). Two sides of the same coin? The role of rumination and reflection in elementary school teachers' classroom stress
and burnout. Teaching and Teacher Education, 47, 131–141.
Kowalski, R. M. (2000). “I was only kidding!” Victims’ and perpetrators’ perceptions of teasing. Personality and Social Psychology Bulletin, 26, 231–241. https://doi.
org/10.1177/0146167200264009
Kross, E., & Ayduk, O. (2008). Facilitating adaptive emotional analysis: Distinguishing distanced-analysis of depressive experiences from immersed-analysis and
distraction. Personality and Social Psychology Bulletin, 34(7), 924–938. https://doi.org/10.1177/0146167208315938
Kross, E., Bruehlman-Senecal, E., Park, J., Burson, A., Dougherty, A., Shablack, H., Bremner, R., Moser, J., & Ayduk, O. (2014). Self-talk as a regulatory mechanism:
How you do it matters. Journal of Personality and Social Psychology, 106(2), 304–324. https://doi.org/10.1037/a0035173
Lin, Y. C. (2023). Using virtual classroom simulations in a mathematics methods course to develop pre-service primary mathematics teachers’ noticing skills. British
Journal of Educational Technology, 54(3), 734–753. https://doi.org/10.1111/bjet.13291
Little, R. J. A. (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83(404),
1198–1202. https://doi.org/10.2307/2290157
Lohse-Bossenz, H., Schönknecht, L., & Brandtner, M. (2019). Entwicklung und Validierung eines Fragebogens zur Erfassung Reflexionsbezogener Selbstwirksamkeit von Lehrkräften im Vorbereitungsdienst [Development and validation of a questionnaire assessing pre-service teachers' self-efficacy in reflection]. Empirische Pädagogik, 33(2), 164–179.
Lugrin, J.-L., Latoschik, M. E., Habel, M., Roth, D., Seufert, C., & Grafe, S. (2016). Breaking bad behaviors: A new tool for learning classroom management using
virtual reality. Frontiers in ICT, 3(26). https://doi.org/10.3389/fict.2016.00026
Lyubomirsky, S., Kasri, F., & Zehm, K. (2003). Dysphoric rumination impairs concentration on academic tasks. Cognitive Therapy and Research, 27(3), 309–330.
https://doi.org/10.1023/A:1023918517378
Mehl, M. R., Gosling, S. D., & Pennebaker, J. W. (2006). Personality in its natural habitat: Manifestations and implicit folk theories of personality in daily life. Journal
of Personality and Social Psychology, 90(5), 862–877. https://doi.org/10.1037/0022-3514.90.5.862
Mor, N., & Winquist, J. (2002). Self-focused attention and negative affect: A meta-analysis. Psychological Bulletin, 128(4), 638–662. https://doi.org/10.1037/0033-
2909.128.4.638
Nolen-Hoeksema, S. (1991). Responses to depression and their effects on the duration of depressive episodes. Journal of Abnormal Psychology, 100(4), 569–582.
https://doi.org/10.1037/0021-843X.100.4.569
Nook, E. C., Schleider, J. L., & Somerville, L. H. (2017). A linguistic signature of psychological distancing in emotion regulation. Journal of Experimental Psychology:
General, 146(3), 337–346. https://doi.org/10.1037/xge0000263
Nook, E. C., Vidal Bustamante, C. M., Cho, H. Y., & Somerville, L. H. (2020). Use of linguistic distancing and cognitive reappraisal strategies during emotion regulation
in children, adolescents, and young adults. Emotion, 20(4), 525. https://doi.org/10.1037/emo0000570
Pendergast, D., O’Brien, M., Prestridge, S., & Exley, B. (2022). Self-efficacy in a 3-dimensional virtual reality classroom—initial teacher education students’
experiences. Education Sciences, 12(6), 368. https://doi.org/10.3390/educsci12060368
Prilop, C. N., Weber, K. E., & Kleinknecht, M. (2021). The role of expert feedback in the development of pre-service teachers’ professional vision of classroom
management in an online blended learning environment. Teaching and Teacher Education, 99, Article 103276. https://doi.org/10.1016/j.tate.2020.103276
Reindl, M., Tulis, M., & Dresel, M. (2020). Profiles of emotional and motivational self-regulation following errors: Associations with learning. Learning and Individual
Differences, 77, Article 101806. https://doi.org/10.1016/j.lindif.2019.101806
Remacle, A., Bouchard, S., & Morsomme, D. (2023). Can teaching simulations in a virtual classroom help trainee teachers to develop oral communication skills and
self-efficacy? A randomized controlled trial. Computers & Education, 200, Article 104808. https://doi.org/10.1016/j.compedu.2023.104808
Richter, E., Hußner, I., Huang, Y., Richter, D., & Lazarides, R. (2022). Video-based reflection in teacher education: Comparing virtual reality and real classroom videos.
Computers & Education, 24(3), Article 104601. https://doi.org/10.1016/j.compedu.2022.104601
Rude, S., Gortner, E.-M., & Pennebaker, J. W. (2004). Language use of depressed and depression-vulnerable college students. Cognition & Emotion, 18(8), 1121–1133.
https://doi.org/10.1080/02699930441000030
Seidel, T., Stürmer, K., Blomberg, G., Kobarg, M., & Schwindt, K. (2011). Teacher learning from analysis of videotaped classroom situations: Does it make a difference
whether teachers observe their own teaching or that of others? Teaching and Teacher Education, 27(2), 259–267. https://doi.org/10.1016/j.tate.2010.08.009
Seufert, C., Oberdörfer, S., Roth, A., Grafe, S., Lugrin, J. L., & Latoschik, M. E. (2022). Classroom management competency enhancement for student teachers using a
fully immersive virtual classroom. Computers & Education, 179, Article 104410. https://doi.org/10.1016/j.compedu.2021.104410
Shahane, A. D., Godfrey, D. A., & Denny, B. T. (2023). Predicting real-world emotion and health from spontaneously assessed linguistic distancing using novel scalable
technology. Emotion. Advance online publication. https://doi.org/10.1037/emo0001211
Stürmer, K., Könings, K. D., & Seidel, T. (2013). Declarative knowledge and professional vision in teacher education: Effect of courses in teaching and learning. British
Journal of Educational Psychology, 83(3), 467–483. https://doi.org/10.1111/j.2044-8279.2012.02075.x
Tackman, A. M., Sbarra, D. A., Carey, A. L., Donnellan, M. B., Horn, A. B., Holtzman, N. S., Edwards, T. S., Pennebaker, J. W., & Mehl, M. R. (2019). Depression,
negative emotionality, and self-referential language: A multi-lab, multi-measure, and multi-language-task research synthesis. Journal of Personality and Social
Psychology, 116(5), 817–834. https://doi.org/10.1037/pspp0000187
Tierney, N., Cook, D., McBain, M., & Fay, C. (2021). naniar: Data structures, summaries, and visualisations for missing data. R package version 0.6.1. https://CRAN.R-project.org/package=naniar
van Es, E. A., & Sherin, M. G. (2002). Learning to notice: Scaffolding new teachers’ interpretations of classroom interactions. Journal of Technology and Teacher
Education, 10(4), 571–596. https://www.learntechlib.org/primary/p/9171/.
van Es, E. A., & Sherin, M. G. (2008). Mathematics teachers’ “learning to notice” in the context of a video club. Teaching and Teacher Education, 24(2), 244–276.
https://doi.org/10.1016/j.tate.2006.11.005
Voss, T., & Kunter, M. (2020). “Reality shock” of beginning teachers? Changes in teacher candidates’ emotional exhaustion and constructivist-oriented beliefs. Journal
of Teacher Education, 71(3), 292–306. https://doi.org/10.1177/0022487119839700
Watkins, E. R. (2016). Rumination-focused cognitive-behavioral therapy for depression. New York: Guilford.
Weber, K. E., Prilop, C. N., Viehoff, S., Gold, B., & Kleinknecht, M. (2020). Fördert eine videobasierte Intervention im Praktikum die professionelle Wahrnehmung von Klassenführung?—Eine quantitativ-inhaltsanalytische Messung von Subprozessen professioneller Wahrnehmung [Does a video-based practicum intervention provide a realistic picture of classroom management? A quantitative content analysis of the subprocesses of professional awareness]. Zeitschrift für
Erziehungswissenschaft, 23, 343–365. https://doi.org/10.1007/s11618-020-00939-9
Wegner, D. M., & Giuliano, T. (1980). Arousal-induced attention to self. Journal of Personality and Social Psychology, 38(5), 719–726. https://doi.org/10.1037/0022-
3514.38.5.719
Westphal, A., Kalinowski, E., Hoferichter, C. J., & Vock, M. (2022). K− 12 teachers’ stress and burnout during the COVID-19 pandemic: A systematic review. Frontiers
in psychology, 13, 920326. https://doi.org/10.3389/fpsyg.2022.920326.
Wickham, H. (2019). stringr: Simple, consistent wrappers for common string operations. R package version 1.4.0. https://CRAN.R-project.org/package=stringr.
Wiepke, A., Heinemann, B., Lucke, U., & Schroeder, U. (2021). Jenseits des eigenen Klassenzimmers: Perspektiven & Weiterentwicklungen des VR-Klassenzimmers
[Beyond the own classroom: Perspectives & further developments of the VR classroom.]. DELFI 2021 - Die 19. Fachtagung Bildungstechnologien [Educational
technologies symposium], 331–336. http://dl.gi.de/handle/20.500.12116/37031.
Wolff, C. E., Jarodzka, H., van den Bogert, N., & Boshuizen, H. P. A. (2016). Teacher vision: Expert and novice teachers’ perception of problematic classroom
management scenes. Instructional Science, 44(3), 243–265, 10/f8tcpm.
Wulff, P., Westphal, A., Mientus, L., Nowak, A., & Borowski, A. (2023, January). Enhancing writing analytics in science education research with machine learning and
natural language processing—Formative assessment of science and non-science preservice teachers’ written reflections. In Frontiers in Education, 7, Article
1061461. https://doi.org/10.3389/feduc.2022.1061461. Frontiers.
Wulff, P., Buschhüter, D., Westphal, A., Mientus, L., Nowak, A., & Borowski, A. (2022). Bridging the gap between qualitative and quantitative assessment in science
education research with machine learning—A case for pretrained language models-based clustering. Journal of Science Education and Technology, 31(4), 490–513.
https://doi.org/10.1007/s10956-022-09969-w.
Yarkoni, T. (2010). Personality in 100,000 words: A large-scale analysis of personality and word use among bloggers. Journal of Research in Personality, 44(3),
363–373. https://doi.org/10.1016/j.jrp.2010.04.001
Yee, N., Harris, H., Jabon, M., & Bailenson, J. N. (2011). The expression of personality in virtual worlds. Social Psychological and Personality Science, 2(1), 5–12.
https://doi.org/10.1177/1948550610379056
Zhang, M., Lundeberg, M., Koehler, M. J., & Eberhardt, J. (2011). Understanding affordances and challenges of three types of video for teacher professional
development. Teaching and Teacher Education, 27(2), 454–462. https://doi.org/10.1016/j.tate.2010.09.015
Zimmermann, J., Brockmeyer, T., Hunn, M., Schauenburg, H., & Wolf, M. (2016). First-person pronoun use in spoken language as a predictor of future depressive
symptoms: Preliminary evidence from a clinical sample of depressed patients. Clinical Psychology & Psychotherapy, 24(2), 384–391. https://doi.org/10.1002/
cpp.2006
Wiepke, A., Richter, E., Zender, R., & Richter, D. (2019). Einsatz von Virtual Reality zum Aufbau von Klassenmanagement-Kompetenzen im Lehramtsstudium [Use of
virtual reality for training student teachers’ classroom management competencies]. DELFI, 2019. https://doi.org/10.18420/delfi2019_319
A. Westphal et al.
View publication stats |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | I'm making a presentation to first-year biology students based on this article. Please summarize this excerpt. Be sure to use language that an 11th grader can understand (but define jargon so that students learn some relevant terminology). | 1 Introduction
Cervical squamous intraepithelial lesion (SIL) is a condition characterized by abnormal changes in cervical squamous cells. Most low-grade squamous intraepithelial lesions (LSILs) can regress naturally within 1–2 years, while high-grade squamous intraepithelial lesions (HSILs) have a higher potential for malignant transformation. Current treatment methods for SIL comprise ablative treatments (such as cryotherapy, radiofrequency, and focused ultrasound treatments) and excision procedures (including cold knife conization, cervical loop electrosurgical excision procedure, and laser conization). In the past few years, there has been an increase in the occurrence of SIL among younger women. As a result, clinicians and patients are not only concerned about lesion clearance, but are also paying attention to cervical wound healing, which has emerged as a new area of interest.
SARS-CoV-2, a novel coronavirus, emerged in 2019 and caused a global pandemic of acute respiratory illness (COVID-19). As research has deepened, it has been found that SARS-CoV-2 affects not only the respiratory system but also the digestive, nervous, cardiovascular, endocrine, and reproductive systems, and can decrease the body's immunity, leading to secondary bacterial infections. However, little is known about the impact of SARS-CoV-2 infection on cervical wound healing after treatment. In this study, we investigated the wound healing status of patients who were infected with SARS-CoV-2 within one month after cervical treatment and compared them with a control group of patients who underwent cervical treatment after the disappearance of infection symptoms.
2 Materials and methods
2.1 Study population
A total of 60 patients, aged 19 to 53 years old, who underwent cervical treatment for SILs at the gynecology cervical clinic of the People's Hospital of Guangxi Zhuang Autonomous Region from November 2022 to February 2023 were recruited as the study population. Informed consent was obtained from all subjects involved in the study. The inclusion criteria for patients were a diagnosis of SIL using the three-step method of cervical cytology, which includes HPV testing, colposcopy, and cervical biopsy.
The exclusion criteria included: patients with acute or subacute genital infections, genital malignancy, a history of hysterectomy or pelvic radiotherapy, a history of cervical excision, cervical ablation, or medication treatment, pregnant or lactating women, uncontrolled diabetes, hyperthyroidism or hypothyroidism, severe cardiovascular, cerebral, pulmonary, hepatic, or renal dysfunction, or comorbidities of immunological disorders or immunosuppressive drug use.
The experimental group consisted of 29 patients who exhibited symptoms (such as fever, sore throat, and cough) and were confirmed to have SARS-CoV-2 infection through antigen testing within one month of receiving cervical treatment. The control group comprised 31 patients who underwent cervical treatment at least one week after their SARS-CoV-2 symptoms had resolved. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Guangxi Zhuang Autonomous Region People's Hospital (no. KY-KJT02023-10) on August 01, 2022.
2.2 Treatment methods
All cervical treatments were performed 3–7 days after the end of menstruation, excluding patients who have reached menopause. To assess the extent of the cervical lesion, acetic acid and iodine are applied to stain the junction of the cervical squamous epithelium and columnar epithelium. Local anesthesia with 1% lidocaine is administered at the 4 quadrants of the cervix. The cervical SILs are treated using either loop electrosurgical excision procedure (LEEP) or ablative treatment such as focused ultrasound or radiofrequency ablation. Details regarding the indications for patients undergoing LEEP excision or ablative treatment can be found in the Supplementary materials.
The LEEP is performed using a triangular-shaped electrosurgical knife with a length of 15–20 mm. The treatment area is set to approximately 5 mm from the outer edge of the lesion, utilizing a cutting-coagulation mode with a power setting of 40 W. Depending on the type of transformation zone, different lengths of lesion tissue are removed (7–10 mm for type 1, 10–15 mm for type 2, and 15–25 mm for type 3). Following excision, a ball-shaped electrode is used to perform electrocoagulation for hemostasis at the surgical site.
The power setting for focused ultrasound therapy is between 3.5 and 4.5 W. Treatment is performed in a circular scanning pattern from the lesion area toward the normal area. During the treatment, the focused ultrasound probe is kept in close contact with the treatment area. The treatment should be stopped when local tissues become concave or hardened. The treatment range should extend beyond the edge of the cervical lesion by approximately 2–5 mm. The power for radiofrequency ablation is set to 30 W. The treatment area extends beyond the area that tests positive in acetic acid and iodine tests by 2–5 mm. An auto-coagulator knife is used to ablate the cervical epithelium from the inside out, until the epithelium is thermocoagulated to a light yellow color and the wound has formed a shallow cone shape.
2.3 Cervical wound healing evaluation after treatment
Assessing the rate of wound healing is a key factor in evaluating the progress of wound recovery. A comprehensive evaluation of the healing process includes recording the cervical wound condition immediately after treatment and at a specified time-point after treatment. The wound area is measured using ImageJ software (National Institutes of Health, Bethesda, MD). To calculate the wound healing rate, the formula [(treatment wound area - remaining wound area)/treatment wound area] × 100% is applied. This approach enables clinicians to quantitatively assess the healing process and monitor the progress of wound closure over time. Specifically, the calculation is performed on the 30th day after treatment, providing a comprehensive evaluation of the healing rate.
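As a quick illustration of the healing-rate formula just quoted, the short Python sketch below computes the same percentage. The area values are hypothetical placeholders, not data from the study; in practice they would come from the ImageJ measurements described above.

```python
# Illustrative sketch of the wound healing rate formula quoted in the text.
# The example areas are hypothetical, not measurements from the study.
def wound_healing_rate(treatment_area_mm2: float, remaining_area_mm2: float) -> float:
    """[(treatment wound area - remaining wound area) / treatment wound area] x 100%."""
    return (treatment_area_mm2 - remaining_area_mm2) / treatment_area_mm2 * 100.0

# Example: a 180 mm^2 wound with 27 mm^2 still unhealed on day 30 after treatment.
print(f"healing rate: {wound_healing_rate(180.0, 27.0):.1f}%")  # healing rate: 85.0%
```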
2.4 Vaginal discharge test before and one month after treatment
Before undergoing cervical treatment, each patient underwent a vaginal discharge examination to rule out vaginal inflammation. During the one-month follow-up vaginal colposcopy after receiving cervical treatment, another examination of vaginal discharge was conducted to assess the vaginal microbiota and inflammatory condition.
4 Results
There were no significant differences between the two groups in terms of age, disease severity, treatment methods, or treatment duration. The mean time of SARS-CoV-2 infection in the post-treatment infection group was 15.83 ± 9.74 days after cervical treatment, and the mean time of SARS-CoV-2 infection in the treatment-after-recovery group was 39.13 ± 9.80 days before cervical treatment.
Compared with the control group, the experimental group had a lower wound healing rate: 83.77% (62.04, 97.09) vs. 98.64% (97.10, 99.46), p < 0.001. The box-and-whisker plot of the wound healing rate for the two groups is shown in Supplementary Figure 1. The experimental group also had a higher scab non-shedding rate on the 30th day after treatment than the control group: 24.14% (7/29) vs. 3.22%, p = 0.024.
We conducted further analysis to explore the potential correlation between delayed wound healing and the timing of SARS-CoV-2 infection in the experimental group. Of the 7 patients who experienced delayed wound healing, 5 (71.43%) contracted SARS-CoV-2 within 2 weeks after undergoing cervical treatment, while only 2 (28.57%) were infected with SARS-CoV-2 more than 2 weeks after cervical treatment. It is worth noting that in the control group, there was only 1 patient who experienced poor wound healing, and the cervical treatment of this patient was conducted 45 days after SARS-CoV-2 infection.
Pre-treatment vaginal discharge tests for all patients exhibited normal levels of white blood cell counts and leukocyte esterase, with no detection of trichomonas vaginalis, pseudohyphae, or budding spores in both pre and post-treatment assessments. We compared the correlation between white blood cell count and leukocyte esterase in vaginal discharge with cervical healing. The results of the chi-square test for contingency table revealed no significant correlation between white blood cell count or leukocyte esterase in vaginal discharge and delayed wound healing of the cervix (defined as the non-shedding of scabs after 1 month of treatment) (p = 0.947 and 0.970, respectively). | [question]
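For readers unfamiliar with the chi-square test mentioned above, here is a minimal Python sketch of how such a contingency-table analysis is run. The counts are invented for illustration; the excerpt does not report the study's raw cross-tabulation, only the resulting p-values (0.947 and 0.970).

```python
# Hypothetical 2x2 contingency table: elevated white blood cell count in vaginal
# discharge (rows: yes/no) versus delayed cervical wound healing (columns: yes/no).
# These counts are made up for illustration only.
from scipy.stats import chi2_contingency

observed = [
    [3, 12],
    [5, 40],
]

chi2, p_value, dof, _expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
# A p-value far above 0.05, as reported in the study, is read as no significant
# association between the discharge finding and delayed wound healing.
```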
https://www.frontiersin.org/journals/medicine/articles/10.3389/fmed.2023.1222767/full
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
You must only respond with information found in the text block. You must begin with an introduction paragraph and finish with a concluding paragraph. The body must only be bullet points. | Can you provide a summary of the text, focusing on Lawrence G. Roberts' and Leonard Kleinrock's contributions to the origin of the internet? | The first recorded description of the social interactions that could be enabled
through networking was a series of memos written by J.C.R. Licklider of MIT in
August 1962 discussing his “Galactic Network” concept. He envisioned a globally
interconnected set of computers through which everyone could quickly access
data and programs from any site. In spirit, the concept was very much like the
Internet of today. Licklider was the first head of the computer research program
at DARPA,4
starting in October 1962. While at DARPA he convinced his successors
at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts,
of the importance of this networking concept.
Leonard Kleinrock at MIT published the first paper on packet switching theory in
July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts
of the theoretical feasibility of communications using packets rather than
circuits, which was a major step along the path towards computer networking.
The other key step was to make the computers talk together. To explore this,
in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in
Mass. to the Q-32 in California with a low speed dial-up telephone line creating
the first (however small) wide-area computer network ever built. The result
of this experiment was the realization that the time-shared computers could
work well together, running programs and retrieving data as necessary on the
remote machine, but that the circuit switched telephone system was totally
inadequate for the job. Kleinrock’s conviction of the need for packet switching
was confirmed.
In late 1966 Roberts went to DARPA to develop the computer network concept
and quickly put together his plan for the “ARPANET”, publishing it in 1967. At the
conference where he presented the paper, there was also a paper on a packet
network concept from the UK by Donald Davies and Roger Scantlebury of NPL.
Scantlebury told Roberts about the NPL work as well as that of Paul Baran and
others at RAND. The RAND group had written a paper on packet switching
networks for secure voice in the military in 1964. It happened that the work at
MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in
parallel without any of the researchers knowing about the other work. The word
“packet” was adopted from the work at NPL and the proposed line speed to be
used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.5
In August 1968, after Roberts and the DARPA funded community had refined the
overall structure and specifications for the ARPANET, an RFQ was released by
DARPA for the development of one of the key components, the packet switches
called Interface Message Processors (IMP’s).
The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt
Beranek and Newman (BBN). As the BBN team worked on the IMP’s with Bob
Kahn playing a major role in the overall ARPANET architectural design, the
network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network
measurement system was prepared by Kleinrock’s team at UCLA.6
Due to Kleinrock’s early development of packet switching theory and his focus on analysis, design
and measurement, his Network Measurement Center at UCLA was selected to be the first node on
the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA
and the first host computer was connected. Doug Engelbart’s project on “Augmentation of Human
Intellect” (which included NLS, an early hypertext system) at Stanford Research Institute (SRI)
provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake)
Feinler and including functions such as maintaining tables of host name to address mapping as well
as a directory of the RFC’s.
One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent
from Kleinrock’s laboratory to SRI. Two more nodes were added at UC Santa Barbara and University
of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and
Burton Fried at UCSB investigating methods for display of mathematical functions using storage
displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at
Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host
computers were connected together into the initial ARPANET, and the budding Internet was off the
ground. Even at this early stage, it should be noted that the networking research incorporated both
work on the underlying network and work on how to utilize the network. This tradition continues to
this day.
Computers were added quickly to the ARPANET during the following years, and work proceeded on
completing a functionally complete Host-to-Host protocol and other network software. In December
1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET
Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed
implementing NCP during the period 1971-1972, the network users finally could begin to develop
applications.
In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the
International Computer Communication Conference (ICCC). This was the first public demonstration
of this new network technology to the public. It was also in 1972 that the initial “hot” application,
electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send
and read software, motivated by the need of the ARPANET developers for an easy coordination
mechanism. In July, Roberts expanded its utility by writing the first email utility program to list,
selectively read, file, forward, and respond to messages. From there email took off as the largest
network application for over a decade. This was a harbinger of the kind of activity we see on the
World Wide Web today, namely, the enormous growth of all kinds of “people-to-people” traffic. | You must only respond with information found in the text block. You must begin with an introduction paragraph and finish with a concluding paragraph. The body must only be bullet points.
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | What are the core differences between fundamental analysis and technical analysis, in what they measure and how they are used? Include criticisms or downsides of each analysis to explain the differences in practice. | What Is Fundamental Analysis?
Fundamental analysis is used in finance to evaluate the intrinsic value—the real worth—of a security, sector, or economy. It's used when scrutinizing income statements, balance sheets, and cash flow statements for a company's stock. Fundamental analysis is generally for those looking for long-term value.
As such, those performing this kind of analysis are trying to calculate how much the company will make in the future against its present net value. A stock trading below the company's intrinsic value is seen as promising, while the opposite is true for those priced above it. The outcome of the analysis can lead to buying, holding, or selling a security.
The Main Tools of Fundamental Analysis
This approach seeks to uncover the intrinsic value of a security, such as a stock or currency, using these main tools:
Financial statements: These include a company's income statement, balance sheet, and cash flow statement, which provide an economic snapshot and help assess its profitability, liquidity, and solvency. Analysts use these to determine financial ratios, multiples, and other mathematical representations of a firm's financial health.
Economic indicators: Macroeconomic indicators like gross domestic product growth, inflation, and unemployment rates are used to understand the economic environment in which a company operates. These indicators can affect consumer behavior and, therefore, a company's performance.
Interest rates: Central bank interest rates can significantly affect an investment's value. Higher interest rates generally lead to lower stock prices, while lower rates boost stock prices.
News and events: Company news, such as earnings reports, new contracts, and regulatory changes, affect stock prices.
Qualitative information: This includes details about management quality, industry cycles, competitive advantage, and other nonquantifiable factors that affect a company's stock.
Investors use these tools to see whether a security is undervalued or overvalued.
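As a rough illustration of the "financial ratios, multiples, and other mathematical representations" mentioned in the financial statements item above, the Python sketch below derives a few common ratios from made-up statement figures. None of the numbers refer to a real company; they exist only to show where each ratio comes from.

```python
# Hypothetical statement figures (in millions), invented purely for illustration.
income_statement = {"revenue": 1200.0, "net_income": 150.0}
balance_sheet = {"current_assets": 800.0, "current_liabilities": 500.0,
                 "total_debt": 400.0, "shareholder_equity": 900.0}
market = {"share_price": 30.0, "shares_outstanding": 100.0}

net_margin = income_statement["net_income"] / income_statement["revenue"]
current_ratio = balance_sheet["current_assets"] / balance_sheet["current_liabilities"]
debt_to_equity = balance_sheet["total_debt"] / balance_sheet["shareholder_equity"]
earnings_per_share = income_statement["net_income"] / market["shares_outstanding"]
price_to_earnings = market["share_price"] / earnings_per_share

print(f"net margin:    {net_margin:.1%}")      # profitability
print(f"current ratio: {current_ratio:.2f}")   # liquidity
print(f"debt/equity:   {debt_to_equity:.2f}")  # solvency
print(f"P/E multiple:  {price_to_earnings:.1f}")
```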
Criticisms of Fundamental Analysis
Here are some of the main criticisms of fundamental analysis:
Time-consuming: Fundamental analysis requires extensive research and data collection, which can be very time-intensive.
Subjectivity: As with technical analysis, different analysts may interpret the same data differently, leading to the claim that the results end up subjective.
Information lag: Financial reports and economic data are often released with a delay, potentially making the analysis outdated.
Difficulty quantifying qualitative factors: Aspects like management quality or brand value are hard to objectively measure.
Assumption of market efficiency: Fundamental analysis very often assumes that markets are efficient. Hence, long-term value (the prices on the stock market) will eventually match the stock's underlying reality or intrinsic value. However, the long-term may never come, and in the meantime, what's the best approach to obtain the gains from price changes?
Long-term focus: It may not be as useful for short-term trading decisions.
Overlooking market sentiment: Fundamental analysis may not adequately account for investor psychology and market trends. It may miss important price trends and patterns that technical analysis might catch.
Vulnerability to unexpected events: Sudden geopolitical or economic events can quickly render fundamental analysis irrelevant.
These criticisms highlight why some investors prefer technical analysis or combine both approaches.
What Is Technical Analysis?
Technical analysis evaluates financial assets, such as stocks, currencies, or commodities, by reviewing the historical price and volume data. Unlike fundamental analysis, which focuses on the intrinsic value of an asset, technical analysis examines the volume and price of shares over time.
Simplifying more than a little, where those using fundamental analysis portray themselves as the sober-minded investigators uncovering real value in the economy, those who use technical analysis start from the perspective that markets are inefficient and price patterns and trends in market data can be exploited for potential profit.
Fundamental and technical analyses are the major schools of thought for approaching the markets.
The Main Tools of Technical Analysis
Here are the tools most often used in technical analyses:
Technical analysis indicators: These are mathematical calculations based on price, volume, or open interest to predict future prices. The indicators are generally based on momentum or mean reversion.
Volume analysis: This studies the number of shares, lots, or contracts traded in a security or market during a certain period.
Relative strength: This metric compares the performance of an asset to a benchmark to gauge its momentum.
Chart pattern analysis: The study of price movements in a market pinpoints patterns that can suggest future activity.
Candlestick pattern analysis: This kind of financial chart used for price movements could indicate investor sentiment, market trends, or reversals of those trends.
Example candlestick pattern
A candlestick pattern on charts consists of rectangular "bodies" that show the opening and closing prices, with thin vertical lines called "wicks" or "shadows" extending above and below to indicate the high and low prices for the period. The body is typically colored differently (often green/white for up moves and red/black for down moves) to quickly convey whether the price closed higher or lower than it opened during that time frame.
Support and resistance: These are horizontal lines drawn on a price chart to indicate where a security's price will be unlikely to move beyond.
Trend analysis: This is interpreting past and present moves in the market to predict future asset prices. Historical prices and trading volume are the most often used.
Each element allows investors to analyze a share or market's behavior.
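To make the "technical analysis indicators" item above more concrete, here is a small Python sketch of two of the simplest indicator styles it mentions: a trailing moving average (often read in a mean-reversion sense) and a raw momentum difference. The price series is invented for illustration and is not a trading recommendation.

```python
# Toy price series, invented for illustration only.
prices = [101.0, 102.5, 101.8, 103.2, 104.0, 103.5, 105.1, 106.0, 105.4, 107.2]

def simple_moving_average(series, window):
    """Average of each trailing `window` of prices."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def momentum(series, lookback):
    """Price change over the last `lookback` periods."""
    return [series[i] - series[i - lookback] for i in range(lookback, len(series))]

sma5 = simple_moving_average(prices, 5)
mom3 = momentum(prices, 3)

# A very rough reading: a last close above the moving average with positive
# momentum is often taken as an uptrend signal, and vice versa.
print(f"last close {prices[-1]:.2f} vs 5-period SMA {sma5[-1]:.2f}")
print(f"3-period momentum: {mom3[-1]:+.2f}")
```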
Criticisms of Technical Analysis
Trying to predict stock prices based on past trading data has long been a topic of heated discussion, with many academics and professional investors being skeptical about its effectiveness. Imagine trying to predict tomorrow's weather just by looking at past weather patterns. That's somewhat akin to what technical analysts do with stock prices.
Skepticism of the Efficiency of Markets
One of the main criticisms of technical analysis is that it goes against the efficient market hypothesis.
This economic theory suggests that stock prices already reflect all available information, making it impossible to consistently beat the market using any predefined strategy. Critics argue that even if technical analysis really worked, everyone would use it, and its advantages would quickly disappear. However, proponents of technical analysis counter that markets aren't always perfectly efficient, and that there's value in analyzing price trends and trading patterns.
Supposed Non-Objectivity of Technical Analysis
Another point of contention is the often subjective nature of technical analysis. Much like seeing shapes in clouds, different analysts might interpret the same chart patterns differently, leading to inconsistent predictions. Moreover, with so much financial data available today, there's a risk of "overfitting," or finding patterns that seem meaningful but are actually just coincidences, similar to how you might flip a coin 10 times, get heads each time, and mistakenly conclude the coin is rigged. This has led to concerns about data mining, where patterns that seem to have worked in the past have no real predictive power for the future.
Despite these criticisms, technical analysis remains popular among many traders and investors. Some argue that technical analysis may appear to work in some cases due to a self-fulfilling prophecy: if enough traders follow the same technical signals, their collective actions could actually move the market in the predicted direction, at least in the short term. Others point out that technical analysis can be a useful tool for understanding market psychology and sentiment. By studying price movements and trading volumes, analysts might gain insights into the emotions driving buying and selling pressure, which can be valuable for identifying resistance and support levels, and timing entry and exit points in trades. | [question]
https://www.investopedia.com/ask/answers/difference-between-fundamental-and-technical-analysis/
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | I learned that Pokemon GO uses augmented reality technology. Discuss which aspects of augmented reality in Pokemon GO contribute to the health and wellness of its players. | The promise of an augmented reality game—Pokémon GO
Recent advances in technology facilitate the promoting of
physical activity (1-8). This is important due to the health
effect of physical activity and the reach and disseminability
of technology based programs/interventions. Specifically,
Pokémon GO (released in 2016) may promote a higher
degree of activity than many previous exergames such
as Nintendo Wii Fit (released in 2007). Pokémon GO uses
augmented reality (AR), which is similar to virtual reality
but the key concept for it is ‘utility’ instead of ‘presence’.
Pokémon GO encourages players to walk around, to socialize,
and even to make friends. AR is a promising concept in
that it allows for another type of tailoring of interventions,
namely geographic tailoring to an individual’s environment.
According to recent studies, Pokémon GO increased
physical activity and decreased sedentary behaviors (1-3);
however, its long-term effect is unknown at this point. In
one of the studies, players had gone back to their baseline
physical activity levels within six weeks of their first
installing the game (2). The real test of the technology
based AR game for promoting physical activity is whether
participants continue to engage in the game over longer
periods of time. It only took 19 days to reach 50 million
downloads and in September, 2016 Pokémon GO reached
500 million downloads. However, since September, 2016
player numbers are on the decline which raises the question
if this game is following the trajectory of most technology
games and only be maintained by those who are hard core
gamers.
Data show that respondents were somewhat more likely
to be younger, white, and female; however, there were no
significant demographic interactions for any behavioral
indicator (1-3). At least one study indicated that Pokémon
GO may be more beneficial for more obese individuals (3).
It was noted that if Pokémon GO players would increase
1,000 steps daily, and this behavior change would be
sustained, about 41 days of additional life expectancy would
be assumed (1). So the public health impact potential is
substantial.
It is recommended that researchers apply theoretical
constructs of health behavior theory (HBT) for behavior
change to promote physical activity (6,7). For example,
SuperBetter includes tailored educational elements based
on HBT, such as individualized assistance and feedback on
each player’s achievement/improvement. Systematic reviews
report that the most prevalent theoretical constructs of
health intervention games were self-monitoring, goal
setting, and self-reward (6,9). Health interventions, which
are designed based on theoretical frameworks, are likely
to lead to longer behavior change (7). Therefore, there is
a need for researchers to assess theoretical contents and
gamification elements of Pokémon GO (4,5).
There are many games developed in academia
incorporating theoretical constructs for health behavior
change interventions; however, very few become popular.
Limited funding for development budgets and speed
of implementation including testing, publishing, and
implementing in a real-world make it challenging (10).
Therefore, it is worth while studying which, either entertainment-based games or educational elements-
based games, is more popular, engages long-term behavior
change, and elevates player’s motivation (7).
In addition, in order to increase the level of scientific
evidence for the interventions, it is important to develop
and adopt standardized protocols in terms of interventions,
populations, and outcomes. This effort eventually
will allow further comparison between differently
designed experimental studies to translate evidence-
based interventions to gaming-based approaches (10).
Collaborations between game developers, app designers,
and content experts in behavior health are necessary (7).
It is also recommended that researchers explore the
potential benefits of applying Pokémon GO to other areas
such as depression, heart disease, type 2 diabetes, etc.
and to diverse study subjects such as children, elders, and
people with disabilities (1,2,8,10). It has been reported that older adult players are aware that playing games can support favorable health outcomes (8). Further research, therefore, should be conducted to assess end users' needs, identify specific GUI (graphical user interface) elements, develop human-centered gaming design guidelines, and evaluate usability issues.
Considering that the characters (the Pokémon) of Pokémon GO appear to be "on top of" the real world, not "in" the real world, players may expect another stage of mixed reality (combining AR and virtual reality) with the concept of 'flexibility', where the illusion is not easily broken. In other words, when we lean in close, the Pokémon get larger, and when we walk around, the virtual landscape changes with respect to our position in the way a real object would (like Minecraft, whose latest update was released in 2016, as an example of an interactive virtual world). This natural and intuitive
way of interaction simplifies the communication between
players and Pokémon GO, especially for players who have
no previous experience. Therefore, with a mixed reality
interface, it is expected that Pokémon GO would have the
potential to be more sustainable and effective.
Even though larger robust longitudinal studies
employing rigorous methodologies and further research on
negative effects such as injuries, road traffic incidents, game
addiction, etc. are still needed (1-3,10,11), Pokémon GO is
emerging as a potentially useful tool for motivational and
behavioral impacts on physical activity. | "================
<TEXT PASSAGE>
=======
The promise of an augmented reality game—Pokémon GO
Recent advances in technology facilitate the promoting of
physical activity (1-8). This is important due to the health
effect of physical activity and the reach and disseminability
of technology based programs/interventions. Specifically,
Pokémon GO (released in 2016) may promote a higher
degree of activity than many previous exergames such
as Nintendo Wii Fit (released in 2007). Pokémon GO uses
augmented reality (AR), which is similar to virtual reality
but the key concept for it is ‘utility’ instead of ‘presence’.
Pokémon GO encourages players to walk around, to socialize,
and even to make friends. AR is a promising concept in
that it allows for another type of tailoring of interventions,
namely geographic tailoring to an individual’s environment.
According to recent studies, Pokémon GO increased
physical activity and decreased sedentary behaviors (1-3);
however, its long-term effect is unknown at this point. In
one of the studies, players had gone back to their baseline
physical activity levels within six weeks of their first
installing the game (2). The real test of the technology
based AR game for promoting physical activity is whether
participants continue to engage in the game over longer
periods of time. It only took 19 days to reach 50 million
downloads and in September, 2016 Pokémon GO reached
500 million downloads. However, since September, 2016
player numbers are on the decline which raises the question
if this game is following the trajectory of most technology
games and only be maintained by those who are hard core
gamers.
Data show that respondents were somewhat more likely
to be younger, white, and female; however, there were no
significant demographic interactions for any behavioral
indicator (1-3). At least one study indicated that Pokémon
GO may be more beneficial for more obese individuals (3).
It was noted that if Pokémon GO players would increase
1,000 steps daily, and this behavior change would be
sustained, about 41 days of additional life expectancy would
be assumed (1). So the public health impact potential is
substantial.
It is recommended that researchers apply theoretical
constructs of health behavior theory (HBT) for behavior
change to promote physical activity (6,7). For example,
SuperBetter includes tailored educational elements based
on HBT, such as individualized assistance and feedback on
each player’s achievement/improvement. Systematic reviews
report that the most prevalent theoretical constructs of
health intervention games were self-monitoring, goal
setting, and self-reward (6,9). Health interventions, which
are designed based on theoretical frameworks, are likely
to lead to longer behavior change (7). Therefore, there is
a need for researchers to assess theoretical contents and
gamification elements of Pokémon GO (4,5).
T h e r e a r e m a n y g a m e s d e v e l o p e d i n a c a d e m i a
incorporating theoretical constructs for health behavior
change interventions; however, very few become popular.
Limited funding for development budgets and speed
of implementation including testing, publishing, and
implementing in a real-world make it challenging (10).
Therefore, it is worth while studying which, either entertainment-based games or educational elements-
based games, is more popular, engages long-term behavior
change, and elevates player’s motivation (7).
In addition, in order to increase the level of scientific
evidence for the interventions, it is important to develop
and adopt standardized protocols in terms of interventions,
populations, and outcomes. This effort eventually
will allow further comparison between differently
designed experimental studies to translate evidence-
based interventions to gaming-based approaches (10).
Collaborations between game developers, app designers,
and content experts in behavior health are necessary (7).
It is also recommended that researchers explore the
potential benefits of applying Pokémon GO to other areas
such as depression, heart disease, type 2 diabetes, etc.
and to diverse study subjects such as children, elders, and
people with disabilities (1,2,8,10). It has been reported that
older adult players have an awareness in playing games for
favorable health outcomes (8). Further research, therefore,
should be conducted to identify end user’s needs assessment
and specific GUI (graphical user interface) elements,
develop human-centered gaming design guidelines, and
evaluate usability issues.
Considering the characters (the Pokémons) of Pokémon
GO appear to be “on top of” the real world, not “in” the
real world, players may expect another stage of mixed reality
(combining AR and virtual reality) with the concept of
‘flexibility’ where their illusion is not easily broken. In other
words, when we lean in close the Pokémons get larger, and
when we walk around the virtual landscape changes with
respect to the position in the way a real object would (like
Minecraft—latest update released in 2016—as an example
of an interactive virtual world). This natural and intuitive
way of interaction simplifies the communication between
players and Pokémon GO, especially for players who have
no previous experience. Therefore, with a mixed reality
interface, it is expected that Pokémon GO would have the
potential to be more sustainable and effective.
Even though larger robust longitudinal studies
employing rigorous methodologies and further research on
negative effects such as injuries, road traffic incidents, game
addiction, etc. are still needed (1-3,10,11), Pokémon GO is
emerging as a potentially useful tool for motivational and
behavioral impacts on physical activity.
https://atm.amegroups.org/article/view/14051/pdf
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | Could you provide three of the fastest growing chain restaurants in the United States, and tell me what has allowed them to be successful? They should have recent sales growth of at least 20%. | When people think of restaurant chains, they think of the most popular ones; McDonald's, KFC, Pizza Hut, and so on. However, the fastest-growing restaurant chains in the U.S. are names that might not be common to most eaters. On top of that, the rise of the health-conscious consumer has made an impact on what is most popular these days. The time of people not caring about what they eat is over. The following list of restaurants is ranked by sales growth. You might be surprised by what company proudly owns that coveted top spot. You might also find some investing opportunities along the way.
All numbers below as of May 21, 2019.
1. Mod Pizza
Sales Growth: 44.7%
Total Unit Growth: 33%
Estimated Sales Per Unit (ESPU) Growth: 1.2%
You can guess the specialty served at this restaurant, which is the fastest-growing chain in the U.S. for the second consecutive year. Mod Pizza falls into the fast-casual food category, with the owners focused on a socially conscious platform. Mod pays above the minimum wage, donates significantly to charity, and hires people from all walks of life, including those who have spent time in rehab or prison.
Mod Pizza grew 44.7% in the previous year, with sales of $390.7 million. Mod Pizza opened in 2008, has over 400 locations, and is targeting a total of 1,000 locations by 2024.
2. First Watch
Sales Growth: 33%
Total Unit Growth: 23%
Estimated Sales Per Unit (ESPU) Growth: 8.8%
First Watch focuses on breakfast food and caters to families. It's the fastest growing food chain in the family dining sector. The restaurant serves food that is health conscious, in tune with what customers are looking for in their dietary intake. In 2018, it was also voted one of the best places to work by the Business Intelligence Group.
The company saw sales growth of 33% from the previous year, currently has over 350 locations, and is targeting a total of around 600 locations.
3. Shake Shack
Sales Growth: 27.3%
Total Unit Growth: 36%
Estimated Sales Per Unit (ESPU) Growth: -7.8%
Shake Shack needs no introduction. It is one of the most popular burger chains in the world, with locations in many countries, and it all started from a stand in New York City.
The company is a pioneer in how it attracts employees, such as testing a four-day workweek. Shake Shack saw sales growth of 27.3% from the previous year. The company is a powerhouse with over 250 locations worldwide, including 10 Shake Shacks in airports around the world.
4. Lazy Dog
Sales Growth: 27.1%
Total Unit Growth: 20%
Estimated Sales Per Unit (ESPU) Growth: 6.3%
Lazy Dog is about the atmosphere. They've created a restaurant that takes people to the Rockies with their cabin-like decor. The food focuses on popular American staples, such as burgers and ribs, and they've also tapped into the popular craft beer trend, offering plenty of craft beers to wash down all that food with.
The company saw sales growth of 27.1% and operates a little over 30 restaurants in a handful of states with plans to open further across the country.
5. The Habit Burger Grill
Sales Growth: 22.9%
Total Unit Growth: 18%
Estimated Sales Per Unit (ESPU) Growth: 2.9%
The Habit Burger Grill is a fast-casual restaurant whose specialty is charbroiled burgers. They saw sales growth of 22.9% and have approximately 250 locations. In March 2020, Habit Burger Grill was bought by Yum! Brands, the same company that owns Taco Bell and KFC.
6. Raising Cane's Chicken Fingers
Sales Growth: 22.5%
Total Unit Growth: 13.6%
Estimated Sales Per Unit (ESPU) Growth: 6.5%
First opened in 1996 in Baton Rouge, La., Raising Cane's Chicken Fingers offers (you guessed it) chicken fingers (never frozen) and its own dipping sauce, whose components employees have to swear never to reveal. It's the fastest-growing chain focused on chicken, with sales growth of 22.5% and almost 500 locations.
7. True Food Kitchen
Sales Growth: 22.2%
Total Unit Growth: 19%
Estimated Sales Per Unit (ESPU) Growth: -1.7%
True Food Kitchen is a health-focused brand that has grown rapidly and continues to do so. It has introduced delivery, added a loyalty program, and received an infusion of capital from Oprah Winfrey. The company saw sales grow by 22.2% and has 33 locations nationwide.
8. Tropical Smoothie Cafe
Sales Growth: 20.3%
Total Unit Growth: 14.5%
Estimated Sales Per Unit (ESPU) Growth: 4.3%
As the name suggests, this restaurant chain specializes in smoothies. However, as part of its growth strategy, the company has focused on food offerings that have helped spur growth: 60% of sales come from smoothies and the rest from food.
The company saw sales growth of 20.3% and has 836 locations.
9. Jersey Mike's Subs
Sales Growth: 17.8%
Total Unit Growth: 11.2%
Estimated Sales Per Unit (ESPU) Growth: 5.1%
Jersey Mike’s Subs focuses on sandwiches, has grown rapidly, and has now started to work with Uber Eats and to offer drive-thru options. The company saw sales growth of 17.8% and has an astonishing 1,600 locations nationwide.
10. Blaze Fast-Fire'd Pizza
Sales Growth: 17.1%
Total Unit Growth: 24.9%
Estimated Sales Per Unit (ESPU) Growth: -10%
Blaze Fast-Fire'd Pizza focuses on pizzas and is known for its 11-inch pizza pie and has been testing a 14-inch pizza pie as well. The company is a leader in the fast-casual sector, with sales growth of 17.1% and 300 locations.
The Bottom Line
You might find some investment opportunities on this list, but it is also important to recognize what types of restaurant chains are growing the quickest and where food trends are moving, before making any decisions.
https://www.investopedia.com/articles/markets/062615/americas-10-fastestgrowing-restaurant-chains.asp
Draw your answer from the above text only. Do not use any external information or prior knowledge. Limit your answer to 75 words or fewer. | Why didn't The Copyright Office recommend amending copyright laws? | Stop the Presses? Newspapers in the
Digital Age
During the past 20 years, more than 200 local daily newspapers have either reduced their
publication frequency or ceased publishing altogether. Among those that survived, many employ
a fraction of the journalists that they did at the turn of the 21st century, and many publish far
fewer original, local, and investigative news stories than they did previously. As a result, in order
to get local news, thousands of U.S. communities rely on “ghost newspapers” that are shells of
their former selves and may rarely employ full-time professional local journalists. Researchers
report that, among other societal effects, the lack of a daily newspaper to monitor local
governments and publicly traded companies can lead to increased financing costs to make up for
investors’ lack of trust.
In 2000, daily newspaper industry revenue peaked at $89 billion, adjusted for inflation in 2020
dollars. Twenty years later, the revenue had fallen by 80%. Although some large, national newspapers continue to thrive, the
newspaper industry as a whole has contracted. Websites and mobile apps enabling individuals to access news without a
subscription have increased competition for readers and advertising. Over that 20-year period, revenue gains from online
newspaper advertisements (from $0 to $3.1 billion) have not replaced revenue losses from print newspaper advertisements.
Some technology companies both compete and collaborate with newspaper publishers for online advertising revenue. For
example, in addition to competing with newspapers’ websites for display advertising revenue, Google sells ad spaces (i.e.,
areas on websites/mobile apps set aside for online advertisements) on behalf of online publishers. Likewise, Google buys ad
spaces on behalf of companies seeking to market goods or services to consumers with advertising (i.e., advertisers). For each
step of the process—known as the ad tech stack—Google earns commissions from both buyers and sellers. In January 2023,
the U.S. Department of Justice joined eight states in filing a lawsuit against Google, alleging that the company is violating
antitrust laws by engaging in unlawful conduct to monopolize the ad tech stack. An additional 16 states and the
Commonwealth of Puerto Rico filed a similar suit in 2021. In January 2021, a judicial panel combined this suit with multiple
suits filed by newspaper publishers, advertisers, and others. Google claims these allegations mischaracterize its business and
the degree of competition within the ad tech stack.
In addition, some online platforms—such as news aggregators (e.g., Apple News and Google News) and social media (e.g.,
Facebook)—can both enhance and diminish the ability of newspaper publishers to reach viewers. By acting as intermediaries
between newspapers and their readers, these online platforms may increase consumers’ awareness of newspapers’ websites
and prompt consumers to visit them. Alternatively, the headlines, snippets (small portions) of articles, and photographs
displayed by these online platforms may dissuade consumers from visiting newspaper publishers’ own websites. This may
impede the newspapers’ ability to collect data about their readers and generate revenues from their websites/mobile apps via
subscriptions and advertising.
The Copyright Act generally prohibits online platforms from distributing full articles from newspaper publishers without
their express consent. Courts determine whether a third party’s use of copyright material violates this law on a case-by-case
basis. In June 2022, the U.S. Copyright Office published a report titled Copyright Protections for Publishers at the request of
several members from the U.S. Senate Committee on the Judiciary. The report assessed the viability of establishing “ancillary
copyright” protections for press publishers that would require online news aggregators to pay publishers for using excerpts of
their content. The Copyright Office did not recommend amending copyright laws for this purpose, noting that stakeholders
who filed comments with the office emphasized that the publishers’ challenges were due more to competition issues than to copyright issues.
Some Members of 118th Congress have introduced bills that may help newspaper publishers. For example, the Advertising
Middlemen Endangering Rigorous Internet Competition Accountability Act (S. 1073) would impose certain restrictions
related to the ad tech stack. Online advertising revenues that would otherwise accrue to advertising technology firms could
flow to the newspaper publishers who sell advertising on their papers’ websites. The Journalism Competition and
Preservation Act of 2023 (S. 1094) would potentially increase the relative bargaining power of newspaper publishers.
Your answer must solely be derived from the information in the prompt itself. No outside sources or prior knowledge can be used. | Could you give me a summary of the history of sports betting from 1992 through 2011? | Financing Uncertainty
As is the case with commercial casinos, some tribal operations that expanded in recent years have had difficulty meeting or restructuring debt obligations. The Mashantucket Pequot Nation, which operates the Foxwoods casino, defaulted in 2009 and completed the restructuring of its debt of $2 billion on July 1, 2013.81 According to recent news reports, Foxwoods remains in a precarious financial position, with outstanding loans of around $1.7 billion.82 The Mohegan Tribal Gaming Authority, which refinanced $1.64 billion of long term debt in March 2012, announced layoffs involving hundreds of employees at the Mohegan Sun in several years since then.83 Because tribes are sovereign nations, there are emerging complications for lenders. For example, the Mohegan tribe’s constitution gives its Gaming Disputes Court, made up of a trial court and an appeals court, exclusive jurisdiction over disputes involving gambling. The Mohegan Sun 2015 Annual Report spelled out some of the potential legal issues:
We, the Tribe and our wholly-owned subsidiaries may not be subject to, or permitted to seek protection under, the federal bankruptcy laws since an Indian tribe and we, as an instrumentality of the Tribe, may not be a “person” eligible to be a debtor under the U.S. Bankruptcy Code. Therefore, our creditors may not be able to seek liquidation of our assets or other action under federal bankruptcy laws. Also, the Gaming Disputes Court may lack powers typically associated with a federal bankruptcy court, such as the power to non-consensually alter liabilities, direct the priority of creditors’ payments and liquidate certain assets. The Gaming Disputes Court is a court of limited jurisdiction and may not have jurisdiction over all creditors of ours or our subsidiaries or over all of the territory in which we and our subsidiaries carry on business.84
An ongoing dispute between Wells Fargo Bank and Saybrook Investors LLC, and Wisconsin’s Lac du Flambeau Band of Lake Superior Chippewa Indians could affect gaming financing. Wells Fargo has sued the tribe over its failure to make monthly payments on a $50 million tribal bond to consolidate debt and invest in a riverboat casino operation in Mississippi. The U.S. District Court for the Western District of Wisconsin in 2010 found that the bond deal was invalid because it had not been reviewed by the National Indian Gaming Commission, as the court said was required under IGRA.85 The complicated and long-running dispute has continued after a remand in September 2011 by the Seventh Circuit Court of Appeals.86 It may take more years and possibly a few more appeals for a ruling on the validity of the bond documents other than the bond indenture.87
Pari Mutuel Betting
Legal in 43 states,88 pari mutuel betting is defined as “player banked betting with all the bets pooled and prizes awarded from the pool.”89 The most common examples in the United States are dog and horse racing and jai alai (a game played on a court with a ball and wicker racket), and other sporting events in which participants finish in ranked order.
In recent years, the industry has developed an extensive system of Internet and off track wagering. In 2000, Congress approved legislation to amend the definition of "interstate off track wager" in the Interstate Horseracing Act (15 U.S.C. §§3001-3007). Proponents claim the amendment permits tracks to accept bets online from individuals located in states where pari
mutuel betting is legal (although not necessarily where either off track or online betting is legal); the Department of Justice disagrees.90 A bill introduced in the 114th Congress, H.R. 707, would have clarified that the Wire Act and other laws do not apply to the Interstate Horseracing Act.
Despite the legal uncertainty, interstate pari mutuel betting with remote devices is growing through the use of advance deposit wagering (ADW). Players first set up accounts with companies such as Twinspires (owned by the Churchill Downs racetrack), Xpressbet, or TV Games Network. They then use the accounts to place bets on races over the phone, on a computer, with mobile devices, or with set top remote control devices linked to television channels that broadcast horse racing. The Oregon Racing Commission, which licenses and audits many of the largest firms taking advance deposit wagers, reports that online wagering via its licensed companies rose to $2.9 billion in 2015, from $962 million in 2005.91
Sports Betting
Congress in 1992 passed the Professional and Amateur Sports Protection Act (PASPA; P.L. 102-559) with strong support from the National Basketball Association, the National Football League (NFL), Major League Baseball, the National Hockey League, and the National Collegiate Athletic Association, among others. The law generally barred state governments from licensing, sponsoring, operating, advertising, promoting, or engaging in sports gambling.92 It contained exceptions for Nevada, Oregon, Delaware, and Montana, each of which allowed certain types of sports betting at the time of passage.93 New Jersey failed to pass legislation in time to qualify for the PASPA exemption. Currently, Nevada is the only state to permit wagers on a full complement of sporting events and leagues.94 According to the University of Nevada, Las Vegas Center for Gaming Research, casino goers in Nevada wagered about $4.2 billion on sporting events in 2015, a rise from $3.4 billion in 2012.95
Delaware, which allowed only limited multigame or parlay betting96 on NFL contests at the time the 1992 law was passed, enacted a law in 2009 to create a state sports lottery. The NFL and other sports leagues challenged the law, and the U.S. Third Circuit Court of Appeals ruled that the state was limited to offering narrow betting, similar to what existed in 1992. The U.S. Supreme Court in May 2010 declined to hear an appeal, effectively ending Delaware’s effort to expand sports betting.97 After its voters authorized sports betting at casinos and racetracks in 2011, New Jersey mounted other court challenges to the constitutionality of PASPA.98 In February 2016, the U.S. Third Circuit Court of Appeals ruled that New Jersey’s sports wagering law conflicts with PASPA and could not be implemented.99 The Supreme Court may consider whether to hear New Jersey’s appeal of the lower court ruling.100 According to an estimate by AGA, Americans spent around $150 billion on illegal sports betting in 2015.101
Two bills have been introduced in the 114th Congress related to sports gambling. The New Jersey Betting and Equal Treatment Act of 2015 (H.R. 457) would expressly exempt New Jersey from PASPA. The Sports Gaming Opportunity Act (H.R. 416) would create an exemption from the PASPA prohibitions for any state that establishes sports gambling through laws enacted on or after January 1, 2015, and that go into effect no later than January 1, 2019.
Regulation of Internet Gambling
Federal Internet gambling legislation could benefit some sectors of the gambling industry more than others, depending on how it is crafted. State lottery officials, for example, have expressed concern that proposals that would give existing gambling establishments preference for online poker licenses could give those businesses an advantage in the market.102 By the same token, commercial casinos are worried that under the existing legal framework, online state lottery promotions, such as keno type games, could encroach on their turf. If the United States passes federal online gambling legislation and all states opt in during the next 12 months, H2 Gambling Capital predicts a U.S. online gambling market of $15 billion to $16 billion by 2021.103
Interest groups and gambling companies are at odds over remote gambling. One of the strongest proponents of legalized online poker is the Poker Players Alliance.104 Caesars Entertainment and MGM are among the large casino operators that have urged Congress to adopt federal legislation to regulate Internet gambling to avoid a patchwork of state regulations and different tax rates. These interests formed the Coalition for Consumer and Online Protection in 2014.105 Aligned against them are others, including most prominently the Coalition to Stop Internet Gambling.106 The North American Association of State and Provincial Lotteries (NASPL)107 and the National Conference of State Legislatures (NCSL)108 want individual states to have the right to legalize, license, and tax Internet gambling.109 In 2015, the National Council of Legislators from Gaming States (NCLGS) adopted a list of 10 policy standards for Internet gambling legislation addressing topics such as player protections, taxation, licensing, enforcement, payment processing, and geolocation standards.110 The National Governors Association largely echoes this view, and it has called on lawmakers to include state input before acting on any online gambling legislation.111
Many Indian tribes have declared their opposition to any federal gambling regime, although some of the larger tribes are now beginning to reverse their previous positions, viewing online gambling as a possible business opportunity. | System instruction: [Your answer must solely be derived from the information in the prompt itself. No outside sources or prior knowledge can be used.]
question: [Could you give me a summary of the history of sports betting from 1992 through 2011?]
context: [Financing Uncertainty
As is the case with commercial casinos, some tribal operations that expanded in recent years have had difficulty meeting or restructuring debt obligations. The Mashantucket Pequot Nation, which operates the Foxwoods casino, defaulted in 2009 and completed the restructuring of its debt of $2 billion on July 1, 2013.81 According to recent news reports, Foxwoods remains in a precarious financial position, with outstanding loans of around $1.7 billion.82 The Mohegan Tribal Gaming Authority, which refinanced $1.64 billion of long term debt in March 2012, announced layoffs involving hundreds of employees at the Mohegan Sun in several years since then.83 Because tribes are sovereign nations, there are emerging complications for lenders. For example, the Mohegan tribe’s constitution gives its Gaming Disputes Court, made up of a trial court and an appeals court, exclusive jurisdiction over disputes involving gambling. The Mohegan Sun 2015 Annual Report spelled out some of the potential legal issues:
We, the Tribe and our wholly-owned subsidiaries may not be subject to, or permitted to seek protection under, the federal bankruptcy laws since an Indian tribe and we, as an instrumentality of the Tribe, may not be a “person” eligible to be a debtor under the U.S. Bankruptcy Code. Therefore, our creditors may not be able to seek liquidation of our assets or other action under federal bankruptcy laws. Also, the Gaming Disputes Court may lack powers typically associated with a federal bankruptcy court, such as the power to non-consensually alter liabilities, direct the priority of creditors’ payments and liquidate certain assets. The Gaming Disputes Court is a court of limited jurisdiction and may not have jurisdiction over all creditors of ours or our subsidiaries or over all of the territory in which we and our subsidiaries carry on business.84
An ongoing dispute between Wells Fargo Bank and Saybrook Investors LLC, and Wisconsin’s Lac du Flambeau Band of Lake Superior Chippewa Indians could affect gaming financing. Wells Fargo has sued the tribe over its failure to make monthly payments on a $50 million tribal bond to consolidate debt and invest in a riverboat casino operation in Mississippi. The U.S. District Court for the Western District of Wisconsin in 2010 found that the bond deal was invalid because it had not been reviewed by the National Indian Gaming Commission, as the court said was required under IGRA.85 The complicated and long running dispute has continued after a remand in September 2011 by the Seventh Circuit Court of Appeals.86 It may take more years and possibly afew more appeals for a ruling on the validity of the bond documents other than the bond indenture.87
Pari Mutuel Betting
Legal in 43 states,88 pari mutuel betting is defined as “player banked betting with all the bets pooled and prizes awarded from the pool.”89 The most common examples in the United States are dog and horse racing and jai alai (a game played on a court with a ball and wicker racket), and other sporting events in which participants finish in ranked order.
In recent years, the industry has developed an extensive system of Internet and off track wagering. In 2000, Congress approved legislation to amend the definition of “interstate off track wager” in the Interstate Horseracing Act (15 U.S.C. §§3001 3007). Proponents claim the amendment permits tracks to accept bets online from individuals located in states where pari
mutuel betting is legal (although not necessarily where either off track or online betting is legal); the Department of Justice disagrees.90 A bill introduced in the 114th Congress, H.R. 707, would have clarified that the Wire Act and other laws do not apply to the Interstate Horseracing Act.
Despite the legal uncertainty, interstate pari mutuel betting with remote devices is growing through the use of advance deposit wagering (ADW). Players first set up accounts with companies such as Twinspires (owned by the Churchill Downs racetrack), Xpressbet, or TV Games Network. They then use the accounts to place bets on races over the phone, on a computer, with mobile devices, or with set top remote control devices linked to television channels that broadcast horse racing. The Oregon Racing Commission, which licenses and audits many of the largest firms taking advance deposit wagers, reports that online wagering via its licensed companies rose to $2.9 billion in 2015, from $962 million in 2005.91
Sports Betting
Congress in 1992 passed the Professional and Amateur Sports Protection Act (PASPA; P.L. 102
559) with strong support from the National Basketball Association, the National Football League (NFL), Major League Baseball, the National Hockey League, and the National Collegiate Athletic Association, among others. The law generally barred state governments from licensing, sponsoring, operating, advertising, promoting, or engaging in sports gambling.92 It contained exceptions for Nevada, Oregon, Delaware, and Montana, each of which allowed certain types ofsports betting at the time of passage.93 New Jersey failed to pass legislation in time to qualify for the PASPA exemption. Currently, Nevada is the only state to permit wagers on a full complement of sporting events and leagues.94 According to the University of Nevada, Las Vegas Center for Gaming Research, casino goers in Nevada wagered about $4.2 billion on sporting events in 2015, a rise from $3.4 billion in 2012.95
Delaware, which allowed only limited multigame or parlay betting96 on NFL contests at the time the 1992 law was passed, enacted a law in 2009 to create a state sports lottery. The NFL and other sports leagues challenged the law, and the U.S. Third Circuit Court of Appeals ruled that the state was limited to offering narrow betting, similar to what existed in 1992. The U.S. Supreme Court in May 2010 declined to hear an appeal, effectively ending Delaware’s effort to expand sports betting.97 After its voters authorized sports betting at casinos and racetracks in 2011, New Jersey mounted other court challenges to the constitutionality of PASPA.98 In February 2016, the U.S. Third Circuit Court of Appeals ruled that New Jersey’s sports wagering law conflicts with PASPA and could not be implemented.99 The Supreme Court may consider whether to hear New Jersey’s appeal of the lower court ruling.100 According to an estimate by AGA, Americans spent around $150 billion on illegal sports betting in 2015.101
Two bills have been introduced in the 114th Congress related to sports gambling. The New Jersey Betting and Equal Treatment Act of 2015 (H.R. 457) would expressly exempt New Jersey from PASPA. The Sports Gaming Opportunity Act (H.R. 416) would create an exemption from the PASPA prohibitions for any state that establishes sports gambling through laws enacted on or after January 1, 2015, and that go into effect no later than January 1, 2019.
Regulation of Internet Gambling
Federal Internet gambling legislation could benefit some sectors of the gambling industry more than others, depending on how it is crafted. State lottery officials, for example, have expressed concern that proposals that would give existing gambling establishments preference for online poker licenses could give those businesses an advantage in the market.102 By the same token, commercial casinos are worried that under the existing legal framework, online state lottery promotions, such as keno type games, could encroach on their turf. If the United States passes federal online gambling legislation and all states opt in during the next 12 months, H2 Gambling Capital predicts a U.S. online gambling market of $15 billion to $16 billion by 2021.103
Interest groups and gambling companies are at odds over remote gambling. One of the strongest proponents of legalized online poker is the Poker Players Alliance.104 Caesars Entertainment and MGM are among the large casino operators that have urged Congress to adopt federal legislation to regulate Internet gambling to avoid a patchwork of state regulations and different tax rates. These interests formed the Coalition for Consumer and Online Protection in 2014.105 Aligned against them are others, including most prominently the Coalition to Stop Internet Gambling.106 The North American Association of State and Provincial Lotteries (NASPL)107 and the National Conference of State Legislatures (NCSL)108 want individual states to have the right to legalize, license, and tax Internet gambling.109 In 2015, the National Council of Legislators from Gaming States (NCLGS) adopted a list of 10 policy standards for Internet gambling legislation addressing topics such as player protections, taxation, licensing, enforcement, payment processing, and geolocation standards.110 The National Governors Association largely echoes this view, and it has called on lawmakers to include state input before acting on any online gambling legislation.111
Many Indian tribes have declared their opposition to any federal gambling regime, although some of the larger tribes are now beginning to reverse their previous positions, viewing online gambling as a possible business opportunity.] |
This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. Use complete sentences. Do not use bullet points. Do not use the words "pros" and "cons" in your response.
Draw your answer from the below text only | Is increasing pay for IMA work by up to 15% a good idea? Respond in under 100 words. | Chapter 3: Remuneration of IMA Work
Question 1: Do you agree with our proposal to pay higher fees for IMA Work?
Please state yes/no/maybe and provide reasons.
Question 2: We are evaluating the possibility of increasing fees for IMA Work by
up to 15% compared to the current immigration legal aid fees. Within the range of
up to 15%, what percentage increase do you believe would be appropriate?
Consultation summary
31. In total there were 38 responses to both Question 1 and Question 2. Of the 38
responses to Question 1, 17 agreed with the proposal to pay higher fees for IMA
work (45%), 11 disagreed with the proposal (29%) and 10 responded with ‘maybe’
(26%). Of these, 34 respondents went on to provide reasons for their answer.
32. Most respondents agreed with the Government’s proposal to pay higher fees for IMA
Work but disagreed with the ‘up to 15%’ fee level and the focus on IMA Work. Upon
analysis, the overall sentiment of responses was negative (36 respondents, 95%). Of
the remaining responses (two respondents, 5%), one gave a neutral response and
another respondent gave a positive response – however no additional comments
were given.
33. There were many reasons given for why respondents either disagreed with the
proposal or agreed with the proposal overall but had a negative sentiment. These
have been summarised below.
Fee level
34. Most respondents agreed with the Government’s proposal to pay higher fees for IMA
Work but disagreed with the ‘up to 15%’ fee level, with only two respondents (5%)
agreeing with the ‘up to’ 15% rise. A reason given by one of these respondents was
that ‘lawyers/barristers do very hard important work and should be paid more to
reflect huge responsibility that comes with doing [IMA] work’.
35. There were varying views about what fee level should be required, but over half of
respondents stated that 15% is either insufficient or inappropriate, should be the
minimum increase and/or that the fee level should be higher than 15%. Many
respondents did not provide an alternative rate, but of those that did, increases
ranged from 50% to 150% – these included that fees should be:
• 50% (six respondents);
• raised in line with inflation (three respondents);
• 50% for regular work carried out under the IMA; but raised to 100% for any work
that progresses to the High Court or beyond (three respondents); and
• 100–150%: reflective of inflation, and the lack of increases and subsequent cuts
to fees over the years (three respondents).
36. Of those who said 15% was insufficient or inappropriate, or that a higher rate should
be pursued, there were a multitude of reasons that formed the basis of this response.
For example, respondents stated that 15% would not incentivise capacity and that
increasing legal aid fees by ‘up to 15%’ was insufficient to reflect increased caseload,
and its subsequent impact on capacity within an already ‘overstretched’ sector. Views
were also raised that the proposed increase would not be sufficient to ‘address the
challenges the consultation identified’, especially considering the short timeframe for
making a suspensive claim (eight days). Another view was raised by respondents
around the expected complexity of the work.
37. Respondents also stated that 15% higher fees for IMA Work were insufficient because
legal aid rates have not increased, nor been augmented in line with inflation, since
1996 and furthermore were cut by 10% in 2011. One provider noted that 15% ‘does
little more than address inflationary increases in costs that providers have had to
absorb over the last two years’. Some also noted the depreciation of legal aid fees
over time. Respondents also remarked on a difference in levels of legal aid capacity
across different areas of the UK as an increasing challenge.
38. However, two respondents stated that an increase less than 15% should be pursued.
One stated that it should be 0% as the Government should move to ‘fixed competitive
fees’ acquired by chambers bidding. The other stated it should be 3% on the basis
that legal aid should be a fixed amount no matter the demand.
Scope of fee proposal
39. Some respondents suggested that the proposal should not be restricted to work done
under the IMA. Eight respondents said that the fee increase should be expanded to
all immigration legal aid (21%), two suggested that it should be expanded to all civil
legal aid (5%), and one suggested it should be expanded to all legal aid (3%). Three
other respondents raised the restrictive nature of the proposal but did not provide
further detail.
40. Views included that a raise in fees for IMA Work only could ‘encourage a shift to this
work by providers, away from other essential work that needs to be done’ and could
lead to ‘perverse’ incentives to undertake this work, to the detriment of other
immigration work.
Additional measures
41. Across Questions 1 and 2, respondents stated that additional measures would be
required to improve the effectiveness of the 15% increase. The further measures
mentioned included: accreditation, interpreter fees and disbursements. Some also
stated that additional measures were needed but did not specify further. Those
responses have been summarised in Chapter 4.
Wider stakeholder feedback
42. At the stakeholder engagement events, on costs and fees many stakeholders noted
that the fees uplift should be expanded beyond IMA Work. They also shared the view
that limiting the uplift to IMA Work could risk shifting capacity away from other policy
priority areas and aggravate access to legal aid for other migrants. Several
stakeholders also noted that the 15% uplift is not high enough to increase capacity
and suggested increasing fees in line with inflation (which amounts to a 100% uplift).
Other proposals included paying between £150–250 per hour as the adequate
compensation level that could incentivise providers and help build capacity.
43. In addition to the roundtable sessions, we also received an open letter from 66
providers who shared their views about the civil legal aid sector and provided various
capacity building measures, such as increasing hourly rates for all legal aid
Controlled Work in line with inflation since 1996 (based on the Bank of England
inflation calculator, this comes to around £100 an hour). They further called for a 50%
uplift on work undertaken under the IMA, on top of inflationary increases set out
above, to enable providers to train new staff and take on this work at pace. | This task requires you to answer questions based solely on the information provided in the prompt. You are not allowed to use any external resources or prior knowledge. Use complete sentences. Do not use bullet points. Do not use the words "pros" and "cons" in your response.
Draw your answer from the below text only
Chapter 3: Remuneration of IMA Work
Question 1: Do you agree with our proposal to pay higher fees for IMB Work?
Please state yes/no/maybe and provide reasons.
Question 2: We are evaluating the possibility of increasing fees for IMB Work by
up to 15% compared to the current immigration legal aid fees. Within the range of
up to 15%, what percentage increase do you believe would be appropriate?
Consultation summary
31. In total there were 38 responses to both Question 1 and Question 2. Of the 38
responses to Question 1, 17 agreed with the proposal to pay higher fees for IMA
work (45%), 11 disagreed with the proposal (29%) and 10 responded with ‘maybe’
(26%). Of these, 34 respondents went on to provide reasons for their answer.
32. Most respondents agreed with the Government’s proposal to pay higher fees for IMA
Work but disagreed with the ‘up to 15%’ fee level and the focus on IMA Work. Upon
analysis, the overall sentiment of responses was negative (36 respondents, 95%). Of
the remaining responses (two respondents, 5%), one gave a neutral response and
another respondent gave a positive response – however no additional comments
were given.
33. There were many reasons given for why respondents either disagreed with the
proposal or agreed with the proposal overall but had a negative sentiment. These
have been summarised below.
Fee level
34. Most respondents agreed with the Government’s proposal to pay higher fees for IMA
Work but disagreed with the ‘up to 15%’ fee level, with only two respondents (5%)
agreeing with the ‘up to’ 15% rise. A reason given by one of these respondents was
that ‘lawyers/barristers do very hard important work and should be paid more to
reflect huge responsibility that comes with doing [IMA] work’.
35. There were varying views about what fee level should be required, but over half of
respondents stated that 15% is either insufficient or inappropriate, should be the
minimum increase and/or that the fee level should be higher than 15%. Many
respondents did not provide an alternative rate, but of those that did, increases
ranged from 50% to 150% – these included that fees should be:
• 50% (six respondents);
• raised in line with inflation (three respondents);
• 50% for regular work carried out under the IMA; but raised to 100% for any work
that progresses to the High Court or beyond (three respondents); and
• 100–150%: reflective of inflation, and the lack of increases and subsequent cuts
to fees over the years (three respondents).
36. Of those who said 15% was insufficient or inappropriate, or that a higher rate should
be pursued, there were a multitude of reasons that formed the basis of this response.
For example, respondents stated that 15% would not incentivise capacity and that
increasing legal aid fees by ‘up to 15%’ was insufficient to reflect increased caseload,
and its subsequent impact on capacity within an already ‘overstretched’ sector. Views
were also raised that the proposed increase would not be sufficient to ‘address the
challenges the consultation identified’, especially considering the short timeframe for
making a suspensive claim (eight days). Another view was raised by respondents
around the expected complexity of the work.
37. Respondents also stated that 15% higher fees for IMA Work was insufficient because
legal aid rates have not increased, nor been augmented in line with inflation, since
1996 and furthermore were cut by 10% in 2011. One provider noted that 15% ‘does
little more than address inflationary increases in costs that providers have had to
absorb over the last two years’. Some also noted the depreciation of legal aid fees
over time. Respondents also remarked on a difference in levels of legal aid capacity
across different areas of the UK as an increasing challenge.
38. However, two respondents stated that an increase less than 15% should be pursued.
One stated that it should be 0% as the Government should move to ‘fixed competitive
fees’ acquired by chambers bidding. The other stated it should be 3% on the basis
that legal aid should be a fixed amount no matter the demand.
Scope of fee proposal
39. Some respondents suggested that the proposal should not be restricted to work done
under the IMA. Eight respondents said that the fee increase should be expanded to
all immigration legal aid (21%), two suggested that it should be expanded to all civil
legal aid (5%), and one suggested it should be expanded to all legal aid (3%). Three
other respondents raised the restrictive nature of the proposal but did not provide
further detail.
40. Views included that a raise in fees for IMA Work only could ‘encourage a shift to this
work by providers, away from other essential work that needs to be done’ and could
lead to ‘perverse’ incentives to undertake this work, to the detriment of other
immigration work.
Additional measures
41. Across Questions 1 and 2, respondents stated that additional measures would be
required to improve the effectiveness of the 15% increase. The further measures
mentioned included: accreditation, interpreter fees and disbursements. Some also
stated that additional measures were needed but did not specify further. Those
responses have been summarised in Chapter 4.
Wider stakeholder feedback
42. At the stakeholder engagement events, on costs and fees, many stakeholders noted
that the fees uplift should be expanded beyond IMA Work. They also shared the view
that limiting the uplift to IMA Work could risk shifting capacity away from other policy
priority areas and aggravate access to legal aid for other migrants. Several
stakeholders also noted that the 15% uplift is not high enough to increase capacity
and suggested increasing fees in line with inflation (which amounts to a 100% uplift).
Other proposals included paying between £150–250 per hour as the adequate
compensation level that could incentivise providers and help build capacity.
43. In addition to the roundtable sessions, we also received an open letter from 66
providers who shared their views about the civil legal aid sector and provided various
capacity building measures, such as increasing hourly rates for all legal aid
Controlled Work in line with inflation since 1996 (based on the Bank of England
inflation calculator, this comes to around £100 an hour). They further called for a 50%
uplift on work undertaken under the IMA, on top of inflationary increases set out
above, to enable providers to train new staff and take on this work at pace.
Is increasing pay for IMA work by up to 15% a good idea? Respond in under 100 words. |
The response should be accurate and concise, with little added conversational elements or tone. If you cannot provide the answer to the request based on the context given, make sure to simply state, "The information is not available at this time." | What impact does the FDA expect the nonprescription availability of Opill to have on unintended pregnancies? | 3/10/24, 10:58 AM FDA Approves First Nonprescription Daily Oral Contraceptive | FDA
FDA NEWS RELEASE
FDA Approves First Nonprescription Daily Oral
Contraceptive
For Immediate Release:
July 13, 2023
Español (https://www.fda.gov/news-events/press-announcements/la-fda-aprueba-el-primer-anticonceptivo-oral-diario-sin-receta)
Today, the U.S. Food and Drug Administration approved Opill (norgestrel) tablet for nonprescription use to
prevent pregnancy— the first daily oral contraceptive approved for use in the U.S. without a prescription.
Approval of this progestin-only oral contraceptive pill provides an option for consumers to purchase oral
contraceptive medicine without a prescription at drug stores, convenience stores and grocery stores, as well as
online.
The timeline for availability and price of this nonprescription product is determined by the manufacturer.
Other approved formulations and dosages of other oral contraceptives will remain available by prescription
only.
“Today’s approval marks the first time a nonprescription daily oral contraceptive will be an
available option for millions of people in the United States,” said Patrizia Cavazzoni, M.D.,
director of the FDA’s Center for Drug Evaluation and Research. “When used as directed, daily
oral contraception is safe and is expected to be more effective than currently available
nonprescription contraceptive methods in preventing unintended pregnancy.”
Nonprescription availability of Opill may reduce barriers to access by allowing individuals to obtain an oral
contraceptive without the need to first see a health care provider. Almost half of the 6.1 million pregnancies in
the U.S. each year are unintended. Unintended pregnancies have been linked to negative maternal and
perinatal outcomes, including reduced likelihood of receiving early prenatal care and increased risk of preterm
delivery, with associated adverse neonatal, developmental and child health outcomes. Availability of
nonprescription Opill may help reduce the number of unintended pregnancies and their potential negative
impacts.
The contraceptive efficacy of norgestrel was established with the original approval for prescription use in 1973.
ription-nonprescription-rx-ote-switches) to switch norgestrel from a prescription to an over-the-
counter product. For approval of a product for use in the nonprescription setting, the FDA requires that the
applicant demonstrate
(https://www.accessdata.fda.gov/drugsatfda_docs/nda/2023/0170310rigisogiSumR.pdf) that the product
can be used by consumers safely and effectively, relying only on the nonprescription drug labeling without any
assistance from a health care professional. Studies showed that consumer understanding of information on the
Opill Drug Facts label was high overall and that a high proportion of consumers understood the label
instructions, supporting their ability to properly use the drug when it is available as an over-the-counter
product. When properly used, Opill is safe and effective.
Opill should be taken at the same time every day; adherence to daily use at the same time of day is important
for the effectiveness of Opill. Using medications that interact with Opill can result in decreased efficacy of Opill
or the other medication, or both, potentially resulting in unintended pregnancy.
The most common side effects of Opill include irregular bleeding, headaches, dizziness, nausea, increased
appetite, abdominal pain, cramps or bloating.
Opill should not be used by those who have or have ever had breast cancer. Consumers who have any other
form of cancer should ask a doctor before use. Opill also should not be used together with another hormonal
birth control product such as another oral contraceptive tablet, a vaginal ring, a contraceptive patch, a
contraceptive implant, a contraceptive injection or an IUD (intra-uterine device).
Use of Opill may be associated with changes in vaginal bleeding patterns, such as irregular spotting and
prolonged bleeding. Consumers should inform a health care provider if they develop repeated vaginal bleeding
after sex, or prolonged episodes of bleeding or amenorrhea (absence of menstrual period). Individuals who
miss two periods (or have missed a single period and have missed doses of Opill) or suspect they may be
pregnant should take a pregnancy test. Consumers should discontinue Opill if pregnancy is confirmed.
Opill is not for use as emergency contraception and does not prevent pregnancy after unprotected sex. Oral
contraceptives do not protect against transmission of HIV, AIDS and other sexually transmitted diseases such
as chlamydia, genital herpes, genital warts, gonorrhea, hepatitis B and syphilis. Condoms should be used to
prevent sexually transmitted diseases.
The FDA granted the approval to Laboratoire HRA Pharma, recently acquired by Perrigo Company plc.
Related Information
* Drugs@FDA: Opill (http://www.accessdata.fda.gov/scripts/cder/daf/index.cfm?event=overview.process&varApplNo=017031)
* Decisional Memo (https://www.accessdata.fda.gov/drugsatfda_docs/nda/2023/0170310rigisog1SumR.pdf)
* Opill (0.075mg Oral Norgestrel Tablet) Information (http://www.fda.gov/drugs/postmarket-drug-safety-information-patients-and-providers/opill-0075mg-oral-norgestrel-tablet-information)
###
The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by
assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological
products for human use, and medical devices. The agency also is responsible for the safety and security of our
nation’s food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for
regulating tobacco products.
Inquiries
Media:
Jeremy Kahn (mailto: Jeremy. [email protected])
(301) 796-8671
Consumer:
888-INFO-FDA
https://Awww.fda.gov/news-events/press-announcements/fda-approves-first-nonprescription-daily-oral-contraceptive 3/3 | The response should be accurate and concise, with little added conversational elements or tone. If you cannot provide the answer to the request based on the context given, make sure to simply state, "The information is not available at this time."
What impact does the FDA expect the nonprescription availability of Opill to have on unintended pregnancies?
https://Awww.fda.gov/news-events/press-announcements/fda-approves-first-nonprescription-daily-oral-contraceptive 3/3 |
Please limit your knowledge to the document. Avoid generalizations and ensure accuracy by directly referencing the document's arguments and examples. | What optional mitigations does the NSA recommend for Windows infrastructures against BlackLotus? | National Security Agency | Cybersecurity Information
BlackLotus Mitigation Guide
Executive summary
BlackLotus is a recently publicized malware product garnering significant attention within tech
media. Similar to 2020’s BootHole (CVE-2020-10713), BlackLotus takes advantage of a boot
loader flaw—specifically CVE-2022-21894 Secure Boot bypass known as “Baton Drop”—to
take control of an endpoint from the earliest phase of software boot. Microsoft® issued patches
for supported versions of Windows to correct boot loader logic. However, patches were not
issued to revoke trust in unpatched boot loaders via the Secure Boot Deny List Database
(DBX). Administrators should not consider the threat fully remediated as boot loaders
vulnerable to Baton Drop are still trusted by Secure Boot.
As described in this Cybersecurity Information Sheet (CSI), NSA recommends infrastructure
owners take action by hardening user executable policies and monitoring the integrity of the
boot partition. An optional advanced mitigation is to customize Secure Boot policy by adding
DBX records to Windows® endpoints or removing the Windows Production CA certificate from
Linux® endpoints.
BlackLotus boot security threat
NSA recognizes significant confusion regarding the threat posed by BlackLotus. Some
organizations use terms like “unstoppable,” “unkillable,” and “unpatchable” to describe the
threat. Other organizations believe there is no threat due to patches that Microsoft released in
January 2022 and early 2023 for supported versions of Windows. [1] The risk exists
somewhere between both extremes.
BlackLotus shares some characteristics with Boot Hole (CVE-2020-10713). [2] Instead of
breaking the Linux boot security chain, BlackLotus targets Windows boot by exploiting a flaw in
older boot loaders—also called boot managers—to set off a chain of malicious actions that
compromise endpoint security. Exploitation of Baton Drop (CVE-2022-21894) allows
BlackLotus to strip the Secure Boot policy and prevent its enforcement. Unlike Boot Hole, the
vulnerable boot loaders have not been added to the Secure Boot DBX revocation list. Because
the vulnerable boot loaders are not listed within the DBX, attackers can substitute fully patched
boot loaders with vulnerable versions to execute BlackLotus.
NSA recommends system administrators within DoD and other networks take action.
BlackLotus is not a firmware threat, but instead targets the earliest software stage of boot.
Defensive software solutions can be configured to detect and prevent the installation of the
BlackLotus payload or the reboot event that starts its execution and implantation. NSA
believes that currently published patches could provide a false sense of security for some
infrastructures. Because BlackLotus integrates Shim and GRUB into its implantation routine,
Linux administrators should also be vigilant for variants affecting popular Linux distributions.
Mitigation recommendations
Action 1: Update recovery media and activate optional mitigations
Recommended for all Windows infrastructures. Not applicable to Linux infrastructures.
NSA recommends Windows administrators install the latest security patches for their
endpoints. Microsoft patches from May 2023 contain optional software mitigations to prevent
rollback of the boot manager and kernel to versions vulnerable to Baton Drop and BlackLotus.
The optional mitigations – including a Code Integrity Boot Policy – should be enabled after the
organization has updated its Windows installation, recovery, and diagnostic software to the
latest available versions. [3]
Infrastructure administrators should note that Windows 10 and 11 have applicable security
updates and ongoing mitigation deployments for BlackLotus. Older, unsupported Windows
versions will not receive the full complement of BlackLotus mitigation measures. Windows
infrastructures should migrate to supported versions of Windows if running an unsupported
release. [3]
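As a quick readiness check before enabling the optional mitigations, the minimal Python sketch below (an illustration only, not part of this guidance) assumes a Windows host where the built-in SecureBoot PowerShell module is available and the script runs from an elevated prompt. It shells out to Confirm-SecureBootUEFI and prints the OS build so the patch level can be compared against Microsoft's published guidance [3].

# Illustrative readiness check (assumptions: Windows host, SecureBoot
# PowerShell module present, elevated prompt). Prints the OS build and
# whether Secure Boot is currently enforcing.
import platform
import subprocess

def secure_boot_enabled() -> bool:
    # Confirm-SecureBootUEFI prints True/False; it errors on legacy BIOS
    # systems or when run without administrative rights.
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", "Confirm-SecureBootUEFI"],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and result.stdout.strip().lower() == "true"

if __name__ == "__main__":
    print(f"Windows version/build: {platform.version()}")
    print(f"Secure Boot enforcing: {secure_boot_enabled()}")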
Action 2: Harden defensive policies
Recommended for all infrastructures. The malware install process for BlackLotus places an
older Windows boot loader Extensible Firmware Interface (EFI) binary into the boot partition,
disables Memory Integrity, disables BitLocker, and reboots the device. Many endpoint security
products (e.g., Endpoint Detection and Response, host-based security suites, user-monitoring
packages) can be configured to block one or more of these events outside of a legitimate,
scheduled update. Configure defensive software to scrutinize changes to the EFI boot partition
in particular. Alternatively, leverage application allow lists to permit only known and trusted
executables.
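Because the install chain turns off Memory Integrity and BitLocker before it reboots, periodically confirming that both are still enabled is a cheap tripwire that fits this action. The sketch below is a hedged illustration under stated assumptions (a Windows host, the standard HypervisorEnforcedCodeIntegrity registry location for Memory Integrity, and the built-in manage-bde utility for BitLocker status); it is not an NSA-provided tool.

# Illustrative tripwire (assumptions: Windows host, standard HVCI registry
# location, built-in manage-bde utility). Warns when the protections that
# BlackLotus is known to disable are found switched off.
import subprocess
import winreg

HVCI_KEY = (r"SYSTEM\CurrentControlSet\Control\DeviceGuard"
            r"\Scenarios\HypervisorEnforcedCodeIntegrity")

def memory_integrity_enabled() -> bool:
    # Reads the 'Enabled' value at the assumed Memory Integrity key.
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HVCI_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "Enabled")
            return value == 1
    except OSError:
        return False

def bitlocker_protection_on(volume: str = "C:") -> bool:
    # Heuristic parse of 'manage-bde -status' output.
    out = subprocess.run(["manage-bde", "-status", volume],
                         capture_output=True, text=True).stdout
    return "Protection On" in out

if __name__ == "__main__":
    if not memory_integrity_enabled():
        print("WARNING: Memory Integrity (HVCI) appears to be disabled")
    if not bitlocker_protection_on():
        print("WARNING: BitLocker protection appears to be off for C:")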
Action 3: Monitor device integrity measurements and boot configuration
Recommended for most infrastructures. Many endpoint security products and firmware
monitoring tools provide integrity-scanning features. Configure these products and tools to
monitor the composition of the EFI boot partition. Leverage these tools to look for unexpected
changes in bootmgfw.efi, bootmgr.efi, or the introduction of additional unexpected EFI binaries
(e.g., shimx64.efi or grubx64.efi). Changes to the boot partition are infrequent and warrant
additional scrutiny.
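One way to put this into practice is a simple baseline-and-compare job over the EFI system partition. The Python sketch below is a minimal illustration rather than a substitute for EDR or firmware-monitoring products; it assumes the EFI system partition is already mounted at a path supplied on the command line (for example after mountvol S: /S on Windows, or /boot/efi on many Linux distributions) and that a local baseline.json file may be created.

# Minimal EFI boot partition baseline sketch (assumptions: the partition is
# already mounted at the path given as the first argument; baseline.json is
# writable). New, modified, or removed files are reported on later runs.
import hashlib
import json
import sys
from pathlib import Path

def snapshot(esp_root: Path) -> dict:
    # Map each file under the EFI partition to its SHA-256 digest.
    digests = {}
    for path in sorted(esp_root.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(esp_root))
            digests[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def compare(baseline: dict, current: dict) -> None:
    for name in sorted(set(baseline) | set(current)):
        if name not in baseline:
            print(f"NEW FILE:  {name}")   # e.g., an unexpected shimx64.efi
        elif name not in current:
            print(f"REMOVED:   {name}")
        elif baseline[name] != current[name]:
            print(f"MODIFIED:  {name}")   # e.g., bootmgfw.efi swapped out

if __name__ == "__main__":
    esp, baseline_file = Path(sys.argv[1]), Path("baseline.json")
    current = snapshot(esp)
    if baseline_file.exists():
        compare(json.loads(baseline_file.read_text()), current)
    else:
        baseline_file.write_text(json.dumps(current, indent=2))
        print(f"Baseline written for {len(current)} files")

In practice the baseline should be stored off the endpoint so that an implant cannot silently rewrite it along with the boot partition.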
If unexpected changes are detected within the EFI boot partition, prevent the device from
rebooting. Endpoint and host defensive suites may allow creating rules or triggers that can be
paired with group policies to temporarily restrict reboot. Remediate the boot partition to a
known good state before permitting reboot. A reboot will execute EFI binaries and can implant
BlackLotus.
Microsoft has published specific information regarding the staging of BlackLotus components,
alterations to Windows registry values, and network indicators. Full specifics can be found at
the Microsoft Incident Response blog. [4]
Action 4: Customize UEFI Secure Boot
4.A. Instructions for Windows infrastructures. Expertly administered and exposed
infrastructures only. Not recommended due to limited long-term effectiveness.
BlackLotus relies upon older (pre-January 2022), signed Windows boot loader images to
implant a system. Secure Boot can be updated with DBX deny list hashes that prevent
executing older and vulnerable boot loaders. Public reporting [5] provides indications as to
which boot managers are observed exploited in the wild. In 2020, NSA published "UEFI
Secure Boot Customization" to provide guidance on modifying Secure Boot. Adding DBX
hashes qualifies as a partial customization action covered in section 4 "Customization,"
starting on page 7, and continuing through section 4.4.3 “Update the DB or DBX.” [6]
Additionally, a GitHub.com repository has been set up with some helpful scripts and guides to
accomplish customization. [7]
Note: Adding boot loader hashes to the DBX may render many Windows install and recovery
images, discs, and removable media drives unbootable. Microsoft provides updated install and
recovery images for Windows 11 and 10. Only update the DBX after acquiring install and
recovery media with the January 2022 or later patch assortment applied (e.g., version 22H1 or
newer).
Warning: The following DBX hashes may be combined with the Secure Boot Customization
steps to revoke trust in select boot loaders vulnerable to Baton Drop. [6] However, more
vulnerable boot loaders exist than the DBX can contain. BlackLotus developers can rapidly
switch to alternate vulnerable boot loaders to evade DBX customization. Mitigating BlackLotus
via DBX updates is not recommended. Action 1’s patches and optional mitigations are
recommended instead.
Table: DBX hashes
#   UEFI Secure Boot DBX Hashes
1   B22A7B3CEBB32C80C36EAABB6F77D164AE8B76BF161F423B6E2FBF9DCBC96C02
2   D355041DFBA41F8AE2CE6766ECBC88C93A743FC74F95E7E7AA3EF32CA6E4B390
3   D9F629F6D1D83AC7A15DCB1116E4B9BF128758EC2EA389AA1E0DA3B8F2951150
4   53FCE58746C4B042B101B8682B4E52CE8B620D3C68F69034996E33D3DDDCA1FF
5   F7357DD5000E1FBADBF17CC6025243A243D1BFA705801051119277A30D717B71
6   39C6475B3F00D92EEC049D8F6EFA010CB06F1240ED1CE7E40611278C73817471
7   2E094D21DC457CC4826FCD48395B92DC782F978EEF8210E4B6F5E708527907FF
8   BFE0E68889A750E699788C11F08AFAE940770ED83C1B4A5DB27E10933B29CAD1
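Before attempting any customization, it can be useful to check whether a platform's DBX already contains these hashes. The sketch below is an illustration under stated assumptions: a Linux endpoint with efivarfs mounted and the standard image security database GUID; it performs a plain byte search rather than fully parsing the EFI signature list structures. On Windows, Get-SecureBootUEFI -Name dbx can supply equivalent bytes for the same comparison.

# Illustrative DBX presence check (assumptions: Linux, efivarfs mounted,
# standard EFI_IMAGE_SECURITY_DATABASE_GUID, run with sufficient privileges).
# A raw byte search is a heuristic; it does not parse EFI_SIGNATURE_LIST.
from pathlib import Path

DBX_VAR = Path("/sys/firmware/efi/efivars/"
               "dbx-d719b2cb-3d3a-4596-a3bc-dad00e67656f")

HASHES = [
    "B22A7B3CEBB32C80C36EAABB6F77D164AE8B76BF161F423B6E2FBF9DCBC96C02",
    "D355041DFBA41F8AE2CE6766ECBC88C93A743FC74F95E7E7AA3EF32CA6E4B390",
    "D9F629F6D1D83AC7A15DCB1116E4B9BF128758EC2EA389AA1E0DA3B8F2951150",
    "53FCE58746C4B042B101B8682B4E52CE8B620D3C68F69034996E33D3DDDCA1FF",
    "F7357DD5000E1FBADBF17CC6025243A243D1BFA705801051119277A30D717B71",
    "39C6475B3F00D92EEC049D8F6EFA010CB06F1240ED1CE7E40611278C73817471",
    "2E094D21DC457CC4826FCD48395B92DC782F978EEF8210E4B6F5E708527907FF",
    "BFE0E68889A750E699788C11F08AFAE940770ED83C1B4A5DB27E10933B29CAD1",
]

if __name__ == "__main__":
    dbx = DBX_VAR.read_bytes()   # first 4 bytes are efivarfs attribute flags
    for entry in HASHES:
        status = "present" if bytes.fromhex(entry) in dbx else "MISSING"
        print(f"{status:8s} {entry}")

A hit only means that particular boot loader hash is already revoked; as noted above, more vulnerable boot loaders exist than the DBX can hold.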
4.B. Instructions for Linux infrastructures. Expertly administered and exposed
infrastructures only.
Linux system administrators may forego adding DBX hashes in favor of removing the Microsoft
Windows Production CA 2011 certificate from Secure Boot’s DB. The total number of Baton
Drop-vulnerable boot loaders signed by the key associated with the Production CA’s certificate
is thought to exceed the available DBX memory. Removing the certificate negates the need to
add DBX entries related to Baton Drop and BlackLotus. Linux administrators will still need the
Microsoft Unified Extensible Firmware Interface (UEFI) Third Party Marketplace CA 2011
certificate to utilize Secure Boot with leading Linux distributions. [6]
Do not place the Windows Production CA 2011 certificate in the Machine Owner Key Exclusion
(MOKX) list in lieu of removing it from the DB. Utilizing MOKX in this way will cause the
revoked certificate to still be trusted between firmware initialization and the initialization of
Shim’s Secure Boot extensions.
The Windows Production CA 2011 certificate must be restored if converting the device from
Linux to Windows. Microsoft provides the certificate for download via their resources for
system manufacturers. [9]
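A sensible first step is simply to audit whether the certificate is still present in the platform's DB before planning its removal. The sketch below is a heuristic illustration under assumptions (a Linux endpoint with efivarfs mounted, and the certificate subject strings appearing as printable ASCII inside the DER-encoded certificates); actually rewriting the DB requires a KEK-signed update, for example via the efitools workflow referenced in the Secure Boot customization guidance [6], which is deliberately not shown here.

# Heuristic audit of Secure Boot's db (assumptions: Linux, efivarfs mounted,
# subject strings readable as ASCII within the DER data, sufficient
# privileges). Removal itself requires a KEK-signed variable update.
from pathlib import Path

DB_VAR = Path("/sys/firmware/efi/efivars/"
              "db-d719b2cb-3d3a-4596-a3bc-dad00e67656f")

SUBJECTS = [
    b"Microsoft Windows Production",   # Windows signing CA (candidate for removal)
    b"Microsoft Corporation UEFI CA",  # third-party CA still needed for shim
]

if __name__ == "__main__":
    db = DB_VAR.read_bytes()
    for subject in SUBJECTS:
        state = "found" if subject in db else "not found"
        print(f"{subject.decode():35s} {state} in db")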
Frequently asked questions
1. Is BlackLotus a firmware implant?
No. BlackLotus is boot software. The UEFI boot process involves several phases. Execution
control flow transitions from firmware to software following the Boot Device Select phase. [8]
2. Can BlackLotus be removed or quarantined?
Yes, prior to execution. Devices that boot to a BlackLotus EFI binary will need to be completely
reimaged. Attempts to remove BlackLotus following installation result in kernel errors.
3. Does BlackLotus bypass Secure Boot?
An initial bypass is followed by poisoning that configures Secure Boot to trust the malware. An
older, vulnerable boot loader that is trusted by Secure Boot is necessary to strip the Secure
Boot policy from being enforced so that BlackLotus can implant its entire software stack.
Subsequent boots extend the Microsoft UEFI signing ecosystem with a malicious BlackLotus
certificate. Thus, Secure Boot will trust the malware.
4. Which version of Windows is affected?
BlackLotus targets Windows 11 and 10. Variants may exist to target older, UEFI-booting
versions of Windows. Patches are available for Windows 8.1, 10, and 11.
5. Is Linux affected? Is there a version of BlackLotus that targets Linux?
No, not that has been identified at this time. BlackLotus does incorporate some Linux boot
binaries, but the malware targets Windows OS software. No Linux-targeting variant has been
observed.
6. Is BlackLotus really unstoppable?
No – BlackLotus is very stoppable on fully updated Windows endpoints, Secure Boot-customized devices, or Linux endpoints. Microsoft has released patches and continues to
harden mitigations against BlackLotus and Baton Drop. [1], [3], [4] The Linux community may
remove the Microsoft Windows Production CA 2011 certificate on devices that exclusively boot
Linux. Mitigation options available today will be reinforced by changes to vendor Secure Boot
certificates in the future (some certificates are expiring starting in 2026).
7. Where can I find more public information?
NSA is aware of several technically deep analysis reports posted online from security
researchers and vendors. One thorough source of public information is ESET Security’s blog
referenced as [5] in this report. Another source of information is the Microsoft Security
Response Center. [3], [4]
8. Should I reconfigure Secure Boot?
No. Secure Boot is best left enabled in standard mode. Only advanced infrastructures and
expert administrators should engage the custom/user-defined mode. Some security software
may require additional certificates or hashes to be added to the DB allow list or DBX deny list.
No one should disable Secure Boot on an endpoint built within the past 5 years.
9. Can a Trusted Platform Module (TPM) stop BlackLotus?
No. A TPM can only detect BlackLotus. Implant boot binaries are delivered to the EFI boot
partition after the TPM has recorded boot time measurements. Upon the next reboot, the TPM
captures measurements showing a BlackLotus infection. However, a TPM can only detect –
not prevent – implantation as the TPM is an observer and container of integrity indicator data.
A TPM does not have an active enforcement capability.
In a Network Access Control (NAC) infrastructure based on TPM attestation, NAC would
prevent infected machines from accessing protected resources by indicating changes in
Platform Configuration Registers (PCRs) 4-7. NAC also provides an opportunity to remediate
affected endpoints prior to connecting to a protected resource.
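For attestation-style detection, the measurements of interest are PCRs 4 through 7. As an illustration only (assuming a Linux endpoint with a TPM 2.0 and either a kernel that exposes per-PCR sysfs entries or the tpm2-tools package installed), the sketch below reads those registers so they can be compared against known-good values recorded for the fleet.

# Illustrative PCR 4-7 read (assumptions: Linux, TPM 2.0, either
# /sys/class/tpm/tpm0/pcr-sha256/<n> available or tpm2-tools installed).
# A change in these registers between boots warrants investigation.
import subprocess
from pathlib import Path

PCRS = [4, 5, 6, 7]
SYSFS = Path("/sys/class/tpm/tpm0/pcr-sha256")

def read_pcrs() -> dict:
    values = {}
    if SYSFS.is_dir():   # newer kernels expose one file per PCR
        for n in PCRS:
            values[n] = (SYSFS / str(n)).read_text().strip()
    else:                # fall back to the tpm2-tools command-line utility
        out = subprocess.run(
            ["tpm2_pcrread", "sha256:" + ",".join(str(n) for n in PCRS)],
            capture_output=True, text=True, check=True).stdout
        for raw in out.splitlines():
            line = raw.strip()
            for n in PCRS:
                if line.startswith(f"{n} :") or line.startswith(f"{n}:"):
                    values[n] = line.split(":", 1)[1].strip()
    return values

if __name__ == "__main__":
    for index, digest in sorted(read_pcrs().items()):
        print(f"PCR[{index}] = {digest}")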
10. Can TPM-extended Shim / TrustedShim (T-Shim) stop BlackLotus?
No. T-Shim checks TPM measurements recorded prior to the main boot loader. Secure Boot is
responsible for enforcement following T-Shim.
11. What is Secure Boot customization?
Customization involves one of the following:
• Partial customization – augmenting the Microsoft and system vendor Secure Boot
ecosystem with additional DB and DBX entries as necessary to enable signature and
hash checks on unsupported/custom software or block unwanted software.
• Full customization – replacing all vendor and Microsoft certificates and hashes with
those generated and selected by the infrastructure owner (requires specialized
knowledge of hardware values).
12. How does BlackLotus compare to Boot Hole?
Boot Hole involved flaws in Secure Boot-signed GRUB boot loaders. A configuration file could
be created to cause buffer overflows and arbitrary code execution at boot time. Secure Boot
could be ignored and completely bypassed.
BlackLotus is sophisticated malware observed in the wild. It exploits a flaw (known as Baton
Drop) in Secure Boot-signed copies of the Windows Boot Manager to truncate the Secure Boot
policy values. Instead of stopping due to the lack of DB and DBX values, the vulnerable boot
manager allows boot to continue. BlackLotus injects a version of Shim utilizing its own
Machine Owner Key (MOK) – similar to the allow list DB – to vouch for signatures on its own
malicious binaries. The result is Secure Boot remains enforcing while silently poisoned and
permitting malware to execute.
13. Why doesn’t NSA recommend setting up a custom Secure Boot
ecosystem as a mitigation?
NSA has internally piloted efforts to exclusively rely on custom certificates and hashes to
define Secure Boot policy. Pilot efforts have proven effective at preventing threats like
BlackLotus, Baton Drop, BootHole, and similar prior to discovery. However, the administrative
overhead and vendor collaboration necessary represent a resource investment not appropriate
for most enterprise infrastructures. The process of fully customizing Secure Boot is also not
capable of being automated outside of a narrow selection of workstation and server products.
14. Can Trusted eXecution Technology (TXT) stop BlackLotus?
Yes, if and only if the TPM non-volatile memory (NVRAM) policy is set to boot a specific boot
loader. In practice, setting a specific boot loader has caused administrative challenges when
handling updates that affect the EFI boot partition. TXT is not a recommended mitigation given
the likelihood of rendering endpoints temporarily unbootable.
15. Are virtual machines affected?
Yes. VMs boot into a virtual UEFI environment. BlackLotus targets the OS software boot
loaders that execute following the virtual firmware initialization.
Works cited
[1] Microsoft Security Response Center (2022), January 2022 Security Updates.
https://msrc.microsoft.com/update-guide/releaseNote/2022-Jan
[2] Eclypsium (2020), There’s a Hole in the Boot. https://eclypsium.com/2020/07/29/theres-a-hole-in-the-boot
[3] Microsoft Security Response Center (2023), KB5025885: How to manage the Windows Boot Manager
revocations for Secure Boot changes associated with CVE-2023-24932.
https://support.microsoft.com/help/5025885
[4] Microsoft Incident Response (2023), Guidance for investigating attacks using CVE-2022-21894: The
BlackLotus campaign. https://www.microsoft.com/en-us/blog/2023/04/11/guidance-for-investigating-attacks-using-cve-2022-21894-the-blacklotus-campaign
[5] Smolar, Martin (2023), BlackLotus UEFI Bootkit: Myth Confirmed.
https://www.welivesecurity.com/2023/03/01/blacklotus-uefi-bootkit-myth-confirmed
[6] National Security Agency (2020), UEFI Secure Boot Customization [S/N: U/OO/168873-20].
https://media.defense.gov/2020/Sep/15/2002497594/-1/-1/0/CTR-UEFI-SECURE-BOOT-CUSTOMIZATION-20200915.PDF/CTR-UEFI-SECURE-BOOT-CUSTOMIZATION-20200915.PDF
[7] National Security Agency (2020), UEFI Secure Boot Customization.
https://github.com/nsacyber/Hardware-and-Firmware-Security-Guidance/tree/master/secureboot
[8] Carnegie Mellon University (2022), UEFI – Terra Firma for Attackers.
https://insights.sei.cmu.edu/blog/uefi-terra-firma-for-attackers/
[9] Microsoft (2022), Windows Secure Boot Key Creation and Management Guidance.
https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-secure-boot-key-creation-and-management-guidance
Disclaimer of endorsement
The information and opinions contained in this document are provided "as is" and without any warranties or guarantees.
Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or
otherwise, does not constitute or imply its endorsement, recommendation, or favoring by the United States Government. This
guidance shall not be used for advertising or product endorsement purposes.
Purpose
This document was developed in furtherance of NSA’s cybersecurity missions, including its responsibilities to identify and
disseminate threats to National Security Systems, Department of Defense, and Defense Industrial Base information systems,
and to develop and issue cybersecurity specifications and mitigations. This information may be shared broadly to reach all
appropriate stakeholders.
Contact
Cybersecurity Report Questions and Feedback: [email protected]
Defense Industrial Base Inquiries and Cybersecurity Services: [email protected]
Media Inquiries / Press Desk: 443-634-0721, [email protected]
| Please limit your knowledge to the document. Avoid generalizations and ensure accuracy by directly referencing the document's arguments and examples.
What optional mitigations does the NSA recommend for Windows infrastructures against BlackLotus?
National Security Agency | Cybersecurity Information
BlackLotus Mitigation Guide
Executive summary
BlackLotus is a recently publicized malware product garnering significant attention within tech
media. Similar to 2020’s BootHole (CVE-2020-10713), BlackLotus takes advantage of a boot
loader flaw—specifically CVE-2022-21894 Secure Boot bypass known as “Baton Drop”—to
take control of an endpoint from the earliest phase of software boot. Microsoft ® issued patches
for supported versions of Windows to correct boot loader logic. However, patches were not
issued to revoke trust in unpatched boot loaders via the Secure Boot Deny List Database
(DBX). Administrators should not consider the threat fully remediated as boot loaders
vulnerable to Baton Drop are still trusted by Secure Boot.
As described in this Cybersecurity Information Sheet (CSI), NSA recommends infrastructure
owners take action by hardening user executable policies and monitoring the integrity of the
boot partition. An optional advanced mitigation is to customize Secure Boot policy by adding
DBX records to Windows® endpoints or removing the Windows Production CA certificate from
Linux® endpoints.
BlackLotus boot security threat
NSA recognizes significant confusion regarding the threat posed by BlackLotus. Some
organizations use terms like “unstoppable,” “unkillable,” and “unpatchable” to describe the
threat. Other organizations believe there is no threat due to patches that Microsoft released in
January 2022 and early 2023 for supported versions of Windows. [1] The risk exists
somewhere between both extremes.
BlackLotus shares some characteristics with Boot Hole (CVE-2020-10713). [2] Instead of
breaking the Linux boot security chain, BlackLotus targets Windows boot by exploiting a flaw in
older boot loaders—also called boot managers—to set off a chain of malicious actions that
compromise endpoint security. Exploitation of Baton Drop (CVE-2022-21894) allows
BlackLotus to strip the Secure Boot policy and prevent its enforcement. Unlike Boot Hole, the
vulnerable boot loaders have not been added to the Secure Boot DBX revocation list. Because
the vulnerable boot loaders are not listed within the DBX, attackers can substitute fully patched
boot loaders with vulnerable versions to execute BlackLotus.
NSA recommends system administrators within DoD and other networks take action.
BlackLotus is not a firmware threat, but instead targets the earliest software stage of boot.
U/OO/167397-23 | PP-23-1628 | JUN 2023 Ver. 1.0
1
NSA | BlackLotus Mitigation Guide
Defensive software solutions can be configured to detect and prevent the installation of the
BlackLotus payload or the reboot event that starts its execution and implantation. NSA
believes that currently published patches could provide a false sense of security for some
infrastructures. Because BlackLotus integrates Shim and GRUB into its implantation routine,
Linux administrators should also be vigilant for variants affecting popular Linux distributions.
Mitigation recommendations
Action 1: Update recovery media and activate optional mitigations
Recommended for all Windows infrastructures. Not applicable to Linux infrastructures.
NSA recommends Windows administrators install the latest security patches for their
endpoints. Microsoft patches from May 2023 contain optional software mitigations to prevent
rollback of the boot manager and kernel to versions vulnerable to Baton Drop and BlackLotus.
The optional mitigations – including a Code Integrity Boot Policy – should be enabled after the
organization has updated its Windows installation, recovery, and diagnostic software to the
latest available versions. [3]
Infrastructure administrators should note that Windows 10 and 11 have applicable security
updates and ongoing mitigation deployments for BlackLotus. Older, unsupported Windows
versions will not receive the full complement of BlackLotus mitigation measures. Windows
infrastructures should migrate to supported versions of Windows if running an unsupported
release. [3]
Action 2: Harden defensive policies
Recommended for all infrastructures. The malware install process for BlackLotus places an
older Windows boot loader Extensible Firmware Interface (EFI) binary into the boot partition,
disables Memory Integrity, disables BitLocker, and reboots the device. Many endpoint security
products (e.g., Endpoint Detection and Response, host-based security suites, user-monitoring
packages) can be configured to block one or more of these events outside of a legitimate,
scheduled update. Configure defensive software to scrutinize changes to the EFI boot partition
in particular. Alternatively, leverage application allow lists to permit only known and trusted
executables.
Action 3: Monitor device integrity measurements and boot configuration
Recommended for most infrastructures. Many endpoint security products and firmware
monitoring tools provide integrity-scanning features. Configure these products and tools to
monitor the composition of the EFI boot partition. Leverage these tools to look for unexpected
U/OO/167397-23 | PP-23-1628 | JUN 2023 Ver. 1.0
2
NSA | BlackLotus Mitigation Guide
changes in bootmgfw.efi, bootmgr.efi, or the introduction of additional unexpected EFI binaries
(e.g., shimx64.efi or grubx64.efi). Changes to the boot partition are infrequent and warrant
additional scrutiny.
If unexpected changes are detected within the EFI boot partition, prevent the device from
rebooting. Endpoint and host defensive suites may allow creating rules or triggers that can be
paired with group policies to temporarily restrict reboot. Remediate the boot partition to a
known good state before permitting reboot. A reboot will execute EFI binaries and can implant
BlackLotus.
Microsoft has published specific information regarding the staging of BlackLotus components,
alterations to Windows registry values, and network indicators. Full specifics can be found at
the Microsoft Incident Response blog. [4]
Action 4: Customize UEFI Secure Boot
4.A. Instructions for Windows infrastructures. Expertly administered and exposed
infrastructures only. Not recommended due to limited long-term effectiveness.
BlackLotus relies upon older (pre-January 2022), signed Windows boot loader images to
implant a system. Secure Boot can be updated with DBX deny list hashes that prevent
executing older and vulnerable boot loaders. Public reporting [5] provides indications as to
which boot managers are observed exploited in the wild. In 2020, NSA published "UEFI
Secure Boot Customization" to provide guidance on modifying Secure Boot. Adding DBX
hashes qualifies as a partial customization action covered in section 4 "Customization,"
starting on page 7, and continuing through section 4.4.3 “Update the DB or DBX.” [6]
Additionally, a GitHub.com repository has been set up with some helpful scripts and guides to
accomplish customization. [7]
Note: Adding boot loader hashes to the DBX may render many Windows install and recovery
images, discs, and removable media drives unbootable. Microsoft provides updated install and
recovery images for Windows 11 and 10. Only update the DBX after acquiring install and
recovery media with the January 2022 or later patch assortment applied (e.g., version 22H1 or
newer).
Warning: The following DBX hashes may be combined with the Secure Boot Customization
steps to revoke trust in select boot loaders vulnerable to Baton Drop. [6] However, more
vulnerable boot loaders exist than the DBX can contain. BlackLotus developers can rapidly
switch to alternate vulnerable boot loaders to evade DBX customization. Mitigating BlackLotus
U/OO/167397-23 | PP-23-1628 | JUN 2023 Ver. 1.0
3
NSA | BlackLotus Mitigation Guide
via DBX updates is not recommended. Action 1’s patches and optional mitigations are
recommended instead.
Table: DBX hashes
#
UEFI Secure Boot DBX Hashes
1
B22A7B3CEBB32C80C36EAABB6F77D164AE8B76BF161F423B6E2FBF9DCBC96C02
2
D355041DFBA41F8AE2CE6766ECBC88C93A743FC74F95E7E7AA3EF32CA6E4B390
3
D9F629F6D1D83AC7A15DCB1116E4B9BF128758EC2EA389AA1E0DA3B8F2951150
4
53FCE58746C4B042B101B8682B4E52CE8B620D3C68F69034996E33D3DDDCA1FF
5
F7357DD5000E1FBADBF17CC6025243A243D1BFA705801051119277A30D717B71
6
39C6475B3F00D92EEC049D8F6EFA010CB06F1240ED1CE7E40611278C73817471
7
2E094D21DC457CC4826FCD48395B92DC782F978EEF8210E4B6F5E708527907FF
8
BFE0E68889A750E699788C11F08AFAE940770ED83C1B4A5DB27E10933B29CAD1
4.B. Instructions for Linux infrastructures. Expertly administered and exposed
infrastructures only.
Linux system administrators may forego adding DBX hashes in favor of removing the Microsoft
Windows Production CA 2011 certificate from Secure Boot’s DB. The total number of Baton
Drop-vulnerable boot loaders signed by the key associated with the Production CA’s certificate
is thought to exceed the available DBX memory. Removing the certificate negates the need to
add DBX entries related to Baton Drop and BlackLotus. Linux administrators will still need the
Microsoft Unified Extensible Firmware Interface (UEFI) Third Party Marketplace CA 2011
certificate to utilize Secure Boot with leading Linux distributions. [6]
Do not place the Windows Production CA 2011 certificate in the Machine Owner Key Exclusion
(MOKX) list in lieu of removing it from the DB. Utilizing MOKX in this way will cause the
revoked certificate to still be trusted between firmware initialization and the initialization of
Shim’s Secure Boot extensions.
The Windows Production CA 2011 certificate must be restored if converting the device from
Linux to Windows. Microsoft provides the certificate for download via their resources for
system manufacturers. [9]
U/OO/167397-23 | PP-23-1628 | JUN 2023 Ver. 1.0
4
NSA | BlackLotus Mitigation Guide
Frequently asked questions
1. Is BlackLotus a firmware implant?
No. BlackLotus is boot software. The UEFI boot process involves several phases. Execution
control flow transitions from firmware to software following the Boot Device Select phase. [8]
2. Can BlackLotus be removed or quarantined?
Yes, prior to execution. Devices that boot to a BlackLotus EFI binary will need to be completely
reimaged. Attempts to remove BlackLotus following installation result in kernel errors.
3. Does BlackLotus bypass Secure Boot?
An initial bypass is followed by poisoning that configures Secure Boot to trust the malware. An
older, vulnerable boot loader that is trusted by Secure Boot is necessary to strip the Secure
Boot policy from being enforced so that BlackLotus can implant its entire software stack.
Subsequent boots extend the Microsoft UEFI signing ecosystem with a malicious BlackLotus
certificate. Thus, Secure Boot will trust the malware.
4. Which version of Windows is affected?
BlackLotus targets Windows 11 and 10. Variants may exist to target older, UEFI-booting
versions of Windows. Patches are available for Windows 8.1, 10, and 11.
5. Is Linux affected? Is there a version of BlackLotus that targets Linux?
No, not that has been identified at this time. BlackLotus does incorporate some Linux boot
binaries, but the malware targets Windows OS software. No Linux-targeting variant has been
observed.
6. Is BlackLotus really unstoppable?
No – BlackLotus is very stoppable on fully updated Windows endpoints, Secure Bootcustomized devices, or Linux endpoints. Microsoft has released patches and continues to
harden mitigations against BlackLotus and Baton Drop. [1], [3], [4] The Linux community may
remove the Microsoft Windows Production CA 2011 certificate on devices that exclusively boot
Linux. Mitigation options available today will be reinforced by changes to vendor Secure Boot
certificates in the future (some certificates are expiring starting in 2026).
7. Where can I find more public information?
NSA is aware of several technically deep analysis reports posted online from security
researchers and vendors. One thorough source of public information is ESET Security’s blog
U/OO/167397-23 | PP-23-1628 | JUN 2023 Ver. 1.0
5
NSA | BlackLotus Mitigation Guide
referenced as [5] in this report. Another source of information is the Microsoft Security
Response Center. [3], [4]
8. Should I reconfigure Secure Boot?
No. Secure Boot is best left enabled in standard mode. Only advanced infrastructures and
expert administrators should engage the custom/user-defined mode. Some security software
may require additional certificates or hashes to be added to the DB allow list or DBX deny list.
No one should disable Secure Boot on an endpoint built within the past 5 years.
9. Can a Trusted Platform Module (TPM) stop BlackLotus?
No. A TPM can only detect BlackLotus. Implant boot binaries are delivered to the EFI boot
partition after the TPM has recorded boot time measurements. Upon the next reboot, the TPM
captures measurements showing a BlackLotus infection. However, a TPM can only detect –
not prevent – implantation as the TPM is an observer and container of integrity indicator data.
A TPM does not have an active enforcement capability.
In a Network Access Control (NAC) infrastructure based on TPM attestation, NAC would
prevent infected machines from accessing protected resources by indicating changes in
Platform Configuration Registers (PCRs) 4-7. NAC also provides an opportunity to remediate
affected endpoints prior to connecting to a protected resource.
10. Can TPM-extended Shim / TrustedShim (T-Shim) stop BlackLotus?
No. T-Shim checks TPM measurements recorded prior to the main boot loader. Secure Boot is
responsible for enforcement following T-Shim.
11. What is Secure Boot customization?
Customization involves one of the following:
Partial customization – augmenting the Microsoft and system vendor Secure Boot
ecosystem with additional DB and DBX entries as necessary to enable signature and
hash checks on unsupported/custom software or block unwanted software.
Full customization – replacing all vendor and Microsoft certificates and hashes with
those generated and selected by the infrastructure owner (requires specialized
knowledge of hardware values).
U/OO/167397-23 | PP-23-1628 | JUN 2023 Ver. 1.0
6
NSA | BlackLotus Mitigation Guide
12. How does BlackLotus compare to Boot Hole?
Boot Hole involved flaws in Secure Boot-signed GRUB boot loaders. A configuration file could
be created to cause buffer overflows and arbitrary code execution at boot time. Secure Boot
could be ignored and completely bypassed.
BlackLotus is sophisticated malware observed in the wild. It exploits a flaw (known as Baton
Drop) in Secure Boot-signed copies of the Windows Boot Manager to truncate the Secure Boot
policy values. Instead of stopping due to the lack DB and DBX values, the vulnerable boot
manager allows boot to continue. BlackLotus injects a version of Shim utilizing its own
Machine Owner Key (MOK) – similar to the allow list DB – to vouch for signatures on its own
malicious binaries. The result is Secure Boot remains enforcing while silently poisoned and
permitting malware to execute.
13. Why doesn’t NSA recommend setting up a custom Secure Boot
ecosystem as a mitigation?
NSA has internally piloted efforts to exclusively rely on custom certificates and hashes to
define Secure Boot policy. Pilot efforts have proven effective at preventing threats like
BlackLotus, Baton Drop, BootHole, and similar prior to discovery. However, the administrative
overhead and vendor collaboration necessary represent a resource investment not appropriate
for most enterprise infrastructures. The process of fully customizing Secure Boot is also not
capable of being automated outside of a narrow selection of workstation and server products.
14. Can Trusted eXecution Technology (TXT) stop BlackLotus?
Yes, if and only if the TPM non-volatile memory (NVRAM) policy is set to boot a specific boot
loader. In practice, setting a specific boot loader has caused administrative challenges when
handling updates that affect the EFI boot partition. TXT is not a recommended mitigation given
the likelihood to render endpoints temporarily unbootable.
15. Are virtual machines affected?
Yes. VMs boot into a virtual UEFI environment. BlackLotus targets the OS software boot
loaders that execute following the virtual firmware initialization.
Works cited
[1] Microsoft Security Response Center (2022), January 2022 Security Updates.
https://msrc.microsoft.com/update-guide/releaseNote/2022-Jan
[2] Eclypsium (2020), There’s a Hole in the Boot. https://eclypsium.com/2020/07/29/theres-a-hole-in-the-boot
[3] Microsoft Security Response Center (2023), KB5025885: How to manage the Windows Boot Manager
revocations for Secure Boot changes associated with CVE-2023-24932.
https://support.microsoft.com/help/5025885
U/OO/167397-23 | PP-23-1628 | JUN 2023 Ver. 1.0
7
NSA | BlackLotus Mitigation Guide
[4] Microsoft Incident Response (2023), Guidance for investigating attacks using CVE-2022-21894: The
BlackLotus campaign. https://www.microsoft.com/en-us/blog/2023/04/11/guidance-for-investigatingattacks-using-cve-2022-21894-the-blacklotus-campaign
[5] Smolar, Martin (2023), BlackLotus UEFI Bootkit: Myth Confirmed.
https://www.welivesecurity.com/2023/03/01/blacklotus-uefi-bootkit-myth-confirmed
[6] National Security Agency (2020), UEFI Secure Boot Customization [S/N: U/OO/168873-20].
https://media.defense.gov/2020/Sep/15/2002497594/-1/-1/0/CTR-UEFI-SECURE-BOOTCUSTOMIZATION-20200915.PDF/CTR-UEFI-SECURE-BOOT-CUSTOMIZATION-20200915.PDF
[7] National Security Agency (2020), UEFI Secure Boot Customization.
https://github.com/nsacyber/Hardware-and-Firmware-Security-Guidance/tree/master/secureboot
[8] Carnegie Mellon University (2022), UEFI – Terra Firma for Attackers.
https://insights.sei.cmu.edu/blog/uefi-terra-firma-for-attackers/
[9] Microsoft (2022), Windows Secure Boot Key Creation and Management Guidance.
https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-secure-boot-key-creation-and-management-guidance
Disclaimer of endorsement
The information and opinions contained in this document are provided "as is" and without any warranties or guarantees.
Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or
otherwise, does not constitute or imply its endorsement, recommendation, or favoring by the United States Government. This
guidance shall not be used for advertising or product endorsement purposes.
Purpose
This document was developed in furtherance of NSA’s cybersecurity missions, including its responsibilities to identify and
disseminate threats to National Security Systems, Department of Defense, and Defense Industrial Base information systems,
and to develop and issue cybersecurity specifications and mitigations. This information may be shared broadly to reach all
appropriate stakeholders.
Contact
Cybersecurity Report Questions and Feedback: [email protected]
Defense Industrial Base Inquiries and Cybersecurity Services: [email protected]
Media Inquiries / Press Desk: 443-634-0721, [email protected]
|
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | Hello, I am looking to organize this article so that I can write a report on it later. To start, I would like you to list out in point form all the activities that Charlie was involved in. |
Sophos unveils Chinese cyber espionage tactics in new report
Sophos has unveiled the latest developments in a Chinese cyber espionage campaign in Southeast Asia, as detailed in its report titled “Crimson Palace: New Tools, Tactics, Targets.”
The research conducted by Sophos X-Ops reveals three clusters of nation-state activity - named Cluster Alpha, Cluster Bravo, and Cluster Charlie - inside a high-profile government organisation. These clusters have continued their activities over the nearly two-year-long campaign.
The report notes a renewed presence of both Cluster Bravo and Cluster Charlie, not only within the initial targeted organisation but also across multiple other entities in the region. An important discovery made during this process is a novel keylogger dubbed “Tattletale.” According to the report, this keylogger impersonates users, collecting information related to password policies, security settings, cached passwords, browser information, and storage data.
Paul Jaramillo, director of threat hunting and threat intelligence at Sophos, commented on the adaptive capabilities of these threat actors. “We’ve been in an ongoing chess match with these adversaries. During the initial phases of the operation, Cluster Charlie was deploying various bespoke tools and malware,” he explained. “However, we were able to ‘burn’ much of their previous infrastructure, blocking their Command and Control (C2) tools and forcing them to pivot. This is good; however, their switch to open-source tools demonstrates just how quickly these attacker groups can adapt and remain persistent.”
During its initial activity phase from March to August 2023, Cluster Charlie operated within a high-level government organisation. After a brief hiatus, the cluster re-emerged in September 2023 and continued its operations until at least May 2024. In this second phase, the group aimed to evade endpoint detection and response (EDR) tools while gathering more intelligence. The report suggests that the overarching organisation directing these clusters has shifted tactics, increasingly using open-source tools instead of custom-developed malware.
Sophos X-Ops has tracked ongoing Cluster Charlie activities across multiple organisations in Southeast Asia. Cluster Bravo, originally active for three weeks in March 2023, reappeared in January 2024 and targeted at least 11 other organisations in the region. Both Cluster Bravo and Cluster Charlie share tactics, techniques, and procedures (TTPs) with known Chinese threat groups Earth Longzhi and Unfading Sea Haze, respectively, indicating coordination among these clusters.
Jaramillo noted the increasing coordination and expansion of operations among the clusters. “Not only are we seeing all three of the ‘Crimson Palace’ clusters refine and coordinate their tactics, but they’re also expanding their operations, attempting to infiltrate other targets in Southeast Asia. Given how frequently Chinese nation-state groups share infrastructure and tools, and the fact that Cluster Bravo and Cluster Charlie are moving beyond the original target, we will likely continue to see this campaign evolve - and potentially new locations. We will be monitoring it closely,” he said.
Operation Crimson Palace highlights the ongoing threat posed by sophisticated cyber espionage activities targeting critical sectors. Sophos' continuous monitoring and research efforts serve to identify and mitigate these threats, providing early detection and bolstering the security infrastructure of its partners and clients. | "================
<TEXT PASSAGE>
=======
Sophos unveils Chinese cyber espionage tactics in new report
Sophos has unveiled the latest developments in a Chinese cyber espionage campaign in Southeast Asia, as detailed in its report titled “Crimson Palace: New Tools, Tactics, Targets.”
The research conducted by Sophos X-Ops reveals three clusters of nation-state activity - named Cluster Alpha, Cluster Bravo, and Cluster Charlie - inside a high-profile government organisation. These clusters have continued their activities over the nearly two-year-long campaign.
The report notes a renewed presence of both Cluster Bravo and Cluster Charlie, not only within the initial targeted organisation but also across multiple other entities in the region. An important discovery made during this process is a novel keylogger dubbed “Tattletale.” According to the report, this keylogger impersonates users, collecting information related to password policies, security settings, cached passwords, browser information, and storage data.
Paul Jaramillo, director of threat hunting and threat intelligence at Sophos, commented on the adaptive capabilities of these threat actors. “We’ve been in an ongoing chess match with these adversaries. During the initial phases of the operation, Cluster Charlie was deploying various bespoke tools and malware,” he explained. “However, we were able to ‘burn’ much of their previous infrastructure, blocking their Command and Control (C2) tools and forcing them to pivot. This is good; however, their switch to open-source tools demonstrates just how quickly these attacker groups can adapt and remain persistent.”
During its initial activity phase from March to August 2023, Cluster Charlie operated within a high-level government organisation. After a brief hiatus, the cluster re-emerged in September 2023 and continued its operations until at least May 2024. In this second phase, the group aimed to evade endpoint detection and response (EDR) tools while gathering more intelligence. The report suggests that the overarching organisation directing these clusters has shifted tactics, increasingly using open-source tools instead of custom-developed malware.
Sophos X-Ops has tracked ongoing Cluster Charlie activities across multiple organisations in Southeast Asia. Cluster Bravo, originally active for three weeks in March 2023, reappeared in January 2024 and targeted at least 11 other organisations in the region. Both Cluster Bravo and Cluster Charlie share tactics, techniques, and procedures (TTPs) with known Chinese threat groups Earth Longzhi and Unfading Sea Haze, respectively, indicating coordination among these clusters.
Jaramillo noted the increasing coordination and expansion of operations among the clusters. “Not only are we seeing all three of the ‘Crimson Palace’ clusters refine and coordinate their tactics, but they’re also expanding their operations, attempting to infiltrate other targets in Southeast Asia. Given how frequently Chinese nation-state groups share infrastructure and tools, and the fact that Cluster Bravo and Cluster Charlie are moving beyond the original target, we will likely continue to see this campaign evolve - and potentially new locations. We will be monitoring it closely,” he said.
Operation Crimson Palace highlights the ongoing threat posed by sophisticated cyber espionage activities targeting critical sectors. Sophos' continuous monitoring and research efforts serve to identify and mitigate these threats, providing early detection and bolstering the security infrastructure of its partners and clients.
https://securitybrief.asia/story/sophos-unveils-chinese-cyber-espionage-tactics-in-new-report
================
<QUESTION>
=======
Hello, I am looking to organize this article so that I can write a report on it later. To start, I would like you to list out in point form all the activities that Charlie was involved in.
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." |
Your response to the prompt shall consist exclusively of information contained within the context block with no use of external sources. Your responses shall be between 150 and 200 words and shall not be presented in a list format of any sort. | Please give a short summary of the battery systems of the Mars Helicopter. | F. Telecommunication System
Once separated from the host spacecraft (lander or rover), the Mars Helicopter can only communicate to or be
commanded from Earth via radio link. This link is implemented using a COTS 802.15.4 (Zig-Bee) standard 900 MHz
chipset, SiFlex 02, originally manufactured by LS Research. Two identical SiFlex parts are used, one of which is an
integral part of a base station mounted on the host spacecraft, the other being included in the helicopter electronics.
These radios are mounted on identical, custom PC boards which provide mechanical support, power, heat distribution,
and other necessary infrastructure. The boards on each side of the link are connected to their respective custom antennas.
The helicopter antenna is a loaded quarter wave monopole positioned near the center of the solar panel (which also
serves as ground plane) at the top of the entire helicopter assembly and is fed through a miniature coaxial cable routed
through the mast to the electronics below. The radio is configured and exchanges data with the helicopter and base
station system computers via UART.
One challenge in using off-the-shelf assemblies for electronics systems to be used on Mars is the low temperatures
expected on the surface. At night, the antenna and cable assemblies will see temperatures as low as −140 C. Electronics
assemblies on both base station and helicopter will be kept “warm” (not below −15 C) by heaters as required. Another
challenge is antenna placement and accommodation on the larger host spacecraft. Each radio emits approximately
0.75 W power at 900 MHz with the board consuming up to 3 W supply power when transmitting and approximately
0.15 W while receiving. The link is designed to relay data at over-the-air rates of 20 kbps or 250 kbps over distances of
up to 1000 m.
A one-way data transmission mode is used to recover data from the helicopter in real time during its brief sorties.
When landed, a secure two-way mode is used. Due to protocol overhead and channel management, a maximum return
throughput in flight of 200 kbps is expected while two-way throughputs as low as 10 kbps are supported if required by
marginal, landed circumstances.
G. Power & Energy System
The helicopter is powered by a Li-Ion battery system that is recharged daily by a solar panel. The energy in the
battery is used for operating heaters to survive the cold Martian nights as well as operate the helicopter actuators and
avionics during short flights lasting from 90 seconds to a few minutes. Depending on the latitude of operations and the
Martian season, recharging of this battery through the solar panel could occur over one to multiple sols (Martian days).
The helicopter battery shown in Fig. 12 consists of 6 Sony SE US18650 VTC4 Li-ion cells with a nameplate capacity
of 2 Ah. The maximum discharge rate is greater than 25 A and the maximum cell voltage specified by the manufacturer
is 4.25 V. The continuous tested power load capability of this battery is 480 W with a peak power capability of
510 W. Battery voltage is in the range of 15–25.2 V and the total mass of the 6 cells is 273 g. A cell balancing charge
management system controlled by the FPGA ensures that all the individual cells are at a uniform voltage.
A de-rated end-of-life battery capacity of 35.75 Wh is available for use. Of this capacity, 10.73 Wh (30%) is kept as
reserve, night-time survival energy usage is estimated at 21 Wh for typical operation in the northern latitudes in the
spring season, and approximately 10 Wh is available for flight. Assuming that 20% of the power is at the peak load of
510 W and 80% is at a continuous load of 360 W, approximately 90 sec of flight is possible. These energy projections
represent conservative worst-case end-of-mission battery performance at 0 C initial temperature. More moderate power
loads will extend the flight time.
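To make the arithmetic behind the quoted ~90-second figure explicit, the short sketch below converts the flight energy budget into a flight time under the stated load mix. It only restates the numbers already given in this passage (10 Wh of flight energy, 20% of the time at the 510 W peak load, 80% at the 360 W continuous load); the variable names are illustrative and not taken from the source.

```python
# Back-of-the-envelope check of the quoted ~90 s flight time.
# All inputs come from the passage above; names are illustrative only.

flight_energy_wh = 10.0      # energy available for flight, Wh
peak_load_w = 510.0          # peak power load, W
continuous_load_w = 360.0    # continuous power load, W
peak_fraction = 0.20         # fraction of flight time spent at peak load

# Time-weighted average power draw during flight.
avg_power_w = peak_fraction * peak_load_w + (1.0 - peak_fraction) * continuous_load_w

# Convert watt-hours to joules (1 Wh = 3600 J) and divide by average power.
flight_time_s = flight_energy_wh * 3600.0 / avg_power_w

print(f"Average load during flight: {avg_power_w:.0f} W")   # about 390 W
print(f"Estimated flight time:      {flight_time_s:.0f} s")  # about 92 s, consistent with ~90 s

# A lower average load raises the estimate in proportion, which is why more
# moderate power loads extend the flight time.
```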
The solar panel is made from Inverted Metamorphic (IMM4J) cells from SolAero Technologies. The cells are
optimized for the Mars solar spectrum and occupy a rectangular area with 680 cm2 of substrate (544 cm2 active cell
area) in a region centered and immediately above the co-axial rotors. This region minimally interferes with the flow
through the rotor.
H. Thermal System
The helicopter must survive the cold of the night on Mars where temperatures can drop to -100 C or lower. The
most critical component is the battery which is kept above -15 C through the night as it powers Kapton film heaters
attached to the battery cells. The avionics boards in the ECM surround the battery and are also kept at an elevated
temperature by virtue of their proximity to the warm battery assembly. Insulation around the avionics boards is provided
by a carbon-dioxide gap of 3 cm width. Additional insulation can be provided by replacing the carbon-dioxide gas with
an Aerogel formulation. The outermost fuselage thermal coating is from Sheldahl with Solar absorptivity α = 0.8 and
infra-red (IR) emissivity = 0.1.
In addition to thermal losses through the gas gap (or aerogel), additional losses occur due to conduction in the mast
as well as through the copper wiring that penetrates the ECM from the mast. To minimize the latter, the wire gauges are
selected to be of the thinnest gauges that can still support the current draw during operations without overheating.
Prior to flight, under the control of the FPGA, the thermal system powers on heaters in the motor control boards that
have been exposed to the ambient temperatures. The internal battery temperature is brought up to 5 C to allow high-power
energy extraction from the cells. During operation the ECM and battery warm up as a result of avionics operations and
battery self-heating. However, the thermal inertia of the elements is such that for the short flights of the helicopter, there
is no overheating. | Your response to the prompt shall consist exclusively of information contained within the context block with no use of external sources. Your responses shall be between 150 and 200 words and shall not be presented in a list format of any sort.
F. Telecommunication System
Once separated from the host spacecraft (lander or rover), the Mars Helicopter can only communicate to or be
commanded from Earth via radio link. This link is implemented using a COTS 802.15.4 (Zig-Bee) standard 900 MHz
chipset, SiFlex 02, originally manufactured by LS Research. Two identical SiFlex parts are used, one of which is an
integral part of a base station mounted on the host spacecraft, the other being included in the helicopter electronics.
These radios are mounted on identical, custom PC boards which provide mechanical support, power, heat distribution,
and other necessary infrastructure. The boards on each side of the link are connected to their respective custom antennas.
The helicopter antenna is a loaded quarter wave monopole positioned near the center of the solar panel (which also
serves as ground plane) at the top of the entire helicopter assembly and is fed through a miniature coaxial cable routed
through the mast to the electronics below. The radio is configured and exchanges data with the helicopter and base
station system computers via UART.
One challenge in using off-the-shelf assemblies for electronics systems to be used on Mars is the low temperatures
expected on the surface. At night, the antenna and cable assemblies will see temperatures as low as −140 C. Electronics
assemblies on both base station and helicopter will be kept “warm” (not below −15 C) by heaters as required. Another
challenge is antenna placement and accommodation on the larger host spacecraft. Each radio emits approximately
0.75 W power at 900 MHz with the board consuming up to 3 W supply power when transmitting and approximately
0.15 W while receiving. The link is designed to relay data at over-the-air rates of 20 kbps or 250 kbps over distances of
up to 1000 m.
A one-way data transmission mode is used to recover data from the helicopter in real time during its brief sorties.
When landed, a secure two-way mode is used. Due to protocol overhead and channel management, a maximum return
throughput in flight of 200 kbps is expected while two-way throughputs as low as 10 kbps are supported if required by
marginal, landed circumstances.
G. Power & Energy System
The helicopter is powered by a Li-Ion battery system that is recharged daily by a solar panel. The energy in the
battery is used for operating heaters to survive the cold Martian nights as well as operate the helicopter actuators and
avionics during short flights lasting from 90 seconds to a few minutes. Depending on the latitude of operations and the
Martian season, recharging of this battery through the solar panel could occur over one to multiple sols (Martian days).
The helicopter battery shown in Fig. 12 consists of 6 Sony SE US18650 VTC4 Li-ion cells with a nameplate capacity
of 2 Ah. The maximum discharge rate is greater than 25 A and the maximum cell voltage specified by the manufacturer
is 4.25 V. The continuous tested power load capability of this battery is 480 W with a peak power capability of
510 W. Battery voltage is in the range of 15–25.2 V and the total mass of the 6 cells is 273 g. A cell balancing charge
management system controlled by the FPGA ensures that all the individual cells are at a uniform voltage.
A de-rated end-of-life battery capacity of 35.75 Wh is available for use. Of this capacity, 10.73 Wh (30%) is kept as
reserve, night-time survival energy usage is estimated at 21 Wh for typical operation in the northern latitudes in the
spring season, and approximately 10 Wh is available for flight. Assuming that 20% of the power is at the peak load of
510 W and 80% is at a continuous load of 360 W, approximately 90 sec of flight is possible. These energy projections
represent conservative worst-case end-of-mission battery performance at 0 C initial temperature. More moderate power
loads will extend the flight time.
The solar panel is made from Inverted Metamorphic (IMM4J) cells from SolAero Technologies. The cells are
optimized for the Mars solar spectrum and occupy a rectangular area with 680 cm2 of substrate (544 cm2 active cell
area) in a region centered and immediately above the co-axial rotors. This region minimally interferes with the flow
through the rotor.
H. Thermal System
The helicopter must survive the cold of the night on Mars where temperatures can drop to -100 C or lower. The
most critical component is the battery which is kept above -15 C through the night as it powers Kapton film heaters
attached to the battery cells. The avionics boards in the ECM surround the battery and are also kept at an elevated
temperature by virtue of their proximity to the warm battery assembly. Insulation around the avionics boards is provided
by a carbon-dioxide gap of 3 cm width. Additional insulation can be provided by replacing the carbon-dioxide gas with
an Aerogel formulation. The outermost fuselage thermal coating is from Sheldahl with Solar absorptivity α = 0.8 and
infra-red (IR) emissivity = 0.1.
In addition to thermal losses through the gas gap (or aerogel), additional losses occur due to conduction in the mast
as well as through the copper wiring that penetrates the ECM from the mast. To minimize the latter, the wire gauges are
selected to be of the thinnest gauges that can still support the current draw during operations without overheating.
Prior to flight, under the control of the FPGA, the thermal system powers on heaters in the motor control boards that
have been exposed to the ambient temperatures. The internal battery temperature is brought up to 5 C to allow high-power
energy extraction from the cells. During operation the ECM and battery warm up as a result of avionics operations and
battery self-heating. However, the thermal inertia of the elements is such that for the short flights of the helicopter, there
is no overheating.
Please give a short summary of the battery systems of the Mars Helicopter. |
Draw your answer only from information within the text provided. Ensure that the response explains any terms that may be industry or product-specific. | Paraphrase this article. | Fiber-optic communications was born at a time whenthe telecommunications industry had grown cautious and conservative after making telephone service ubiquitous in the United States and widely available in other developed countries. The backbones of the long distance telephone network were chains of microwave relay towers, which engineers had planned to replace by buried pipelines carrying millimeter waves in the 60-GHz range, starting in the 1970s. Bell Telephone Laboratories were quick to begin research on optical communications after the invention of the laser, but they spent the 1960s studying beam transmission through buried hollow confocal waveguides, expecting laser communications to be the next generation after the millimeter waveguide, on a technology timetable spanning decades. Corning’s invention of the low-loss fiber in 1970 changed all that. Bell abandoned the hollow optical guide in 1972 and never put any millimeter waveguide into commercial service after completing a field test in the mid-1970s. But telephone engineers remained wary of installing fiber without exhaustive tests and field trials. Bell engineers developed and exhaustively tested the first generation of fiber-optic systems, based on multimode graded-index fibers transmitting 45 Mb/s at 850 nm over spans of 10 km, connecting local telephone central offices. Deployment began slowly in the late 1970s, and soon a second fiber window opened at 1300nm, allowing a doubling of speed and transmission distance. In 1980, AT&T announced plans to extend multimode fiber into its long-haul network, by laying a 144-fiber cable between Boston and Washington with repeaters spaced every 7 km along an existing right of way. Yet by then change was accelerating in the no-longer stodgy telecommunications industry. Two crucial choices in system design and the breakup of AT&T were about to launch the modern fiber-optic communications industry. In 1980, Bell Labs announced that the next generation of transoceanic telephone cables would use single-mode fiber instead of the copper coaxial cables used since the first transatlantic phone cable in 1956. In 1982, the upstart MCI Communications picked single-mode fiber as the backbone of its new North American longdistance phone network, replacing the microwave towers that gave the company its original name, Microwave Communications Inc. That same year, AT&T agreed to divest its seven regional telephone companies to focus on long-distance service, computing, and communications hardware. The submarine fiber decision was a bold bet on a new technology based on desperation. Regulators had barred AT&T from operating communication satellites since the mid-1960s. Coax had reached its practical limit for intercontinental cables. Only single-mode fiber transmitting at 1310 nm could transmit 280 Mb/s through 50-km spans stretching more than 6000 km across the Atlantic. AT&T and its partners British Telecom and France Telecom set a target of 1988 for installing TAT-8, the first transatlantic fiber cable. More submarine fiber cables would follow. In 1982, MCI went looking for new technology to upgrade its long-distance phone network. Visits to British Telecom Research Labs and Japanese equipment makers convinced them that single-mode fiber transmitting 400 Mb/s at 1310 nm was ready for installation. 
AT&T and Sprint soon followed, with Sprint ads promoting the new fiber technology by claiming that callers could hear a pin drop over it. Fueled by the breakup of AT&T and intense competition for long-distance telephone service, fiber sales boomed as new long-haul networks were installed, then slumped briefly after their completion. The switch to single-mode fiber opened the room to further system improvements. By 1987, terrestrial long-distance backbone systems were carrying 800 Mb/s, and systems able to transmit 1.7 Gb/s were in development. Long-distance traffic increased as competition reduced long-distance rates, and developers pushed for the next transmission milestone of 2.5 Gb/s. Telecommunications was becoming an important part of the laser and optics market, pushing development of products including diode lasers, receivers, and optical connectors. Fiber optics had shifted the telephone industry into overdrive. Two more technological revolutions in their early stages in the late 1980s would soon shift telecommunications to warp speed. One came from the optical world, the fiber amplifier. The other came from telecommunications—the Internet. Even inthe late 1980s, the bulk of telecommunications traffic consisted of telephone conversations. (Cable television networks carried analog signals and were separate from the usual world of telecommunications.) Telephony was a mature industry, with traffic volume growing about 10% a year. Fiber traffic was increasing faster than that because fiber was displacing older technologies including microwave relays and geosynchronous communication satellites. Telecommunications networks also carried some digital data, but the overall volume was small. The ideas that laid the groundwork for the Internet date back to the late 1960s. Universities began installing terminals so students and faculty could access mainframe computers, ARPANET began operations to connect universities, and telephone companies envisioned linking home users to mainframes through telephone wiring. Special terminals were hooked to television screens for early home information services called videotex. But those data services attracted few customers, and data traffic remained limited until the spread of personal computers in the 1980s. The first personal computer modems sent 300 bits/s through phone lines, a number that soon rose to 1200 bits/s. Initially the Internet was limited to academic and government users, so other PC users accessed private networks such as CompuServe and America Online, but private Internet accounts became available by 1990. The World Wide Web was launched in 1991 at the European Center for Nuclear Research (CERN)and initially grew slowly. Butin 1994the numberofserverssoaredfrom500 to 10,000, and the data floodgates were loosed. Digital traffic soared. By good fortune, the global fiber-optic backbone network was already in place as data traffic started to soar. Construction expenses are a major part of network costs, so multi-fiber cables were laid that in the mid-1980s were thought to be adequate to support many years of normal traffic growth. That kept the “Information Superhighway” from becoming a global traffic jam as data traffic took off. The impact of fiber is evident in Fig. 1, a chart presented by Donald Keck during his 2011 CLEO plenary talk. Diverse new technologies had increased data transmission rates since 1850. Fiber optics became the dominant technology after 1980 and is responsible for the change in slope of the data-rate growth. 
Even morefortunately, Internet traffic was growing in phase with the development of a vital new optical technology, the optical fiber amplifier. Early efforts to develop all-optical amplifiers focused on semiconductor sources, because they could be easily matched to signal wavelengths, but experiments in the middle to late 1980s found high noise levels. Attention turned to fiber amplifiers after David Payne demonstrated the first erbium-doped fiber amplifier in 1987. (See Digonnet’s chapter on p. 195.) Elias Snitzer had demonstrated a neodymium-doped optical amplifier at American Optical in 1964, but it had not caught on because it required flashlamp pumping. Erbium was the right material at the right time. Its gain band fell in the 1550-nm window where optical fibers have minimum attenuation. Within a couple of years, British Telecom Labs had identified a diode-laser pump band at 980 nm and Snitzer, then at Polaroid, had found another at 1480 nm. By 1989, diode-pumped fiber amplifiers looked like good replacements for cumbersome electro-optic repeaters. What launched the bandwidth revolution was the ability of fiber amplifiers to handle wavelength division multiplexed signals. The first tests started with only a few wavelengths and a single amplifier; then developers added more wavelengths and additional amplifiers. The good news was that wavelength-division multiplexing (WDM) multiplied capacity by the number of channels that could be squeezed into the transmission band. The bad news was that WDM also multiplied the number of potential complications. | System Instructions: Draw your answer only from information within the text provided. Ensure that the response explains any terms that may be industry or product-specific.
Question: Paraphrase this article.
Context: Fiber-optic communications was born at a time whenthe telecommunications industry had grown cautious and conservative after making telephone service ubiquitous in the United States and widely available in other developed countries. The backbones of the long distance telephone network were chains of microwave relay towers, which engineers had planned to replace by buried pipelines carrying millimeter waves in the 60-GHz range, starting in the 1970s. Bell Telephone Laboratories were quick to begin research on optical communications after the invention of the laser, but they spent the 1960s studying beam transmission through buried hollow confocal waveguides, expecting laser communications to be the next generation after the millimeter waveguide, on a technology timetable spanning decades. Corning’s invention of the low-loss fiber in 1970 changed all that. Bell abandoned the hollow optical guide in 1972 and never put any millimeter waveguide into commercial service after completing a field test in the mid-1970s. But telephone engineers remained wary of installing fiber without exhaustive tests and field trials. Bell engineers developed and exhaustively tested the first generation of fiber-optic systems, based on multimode graded-index fibers transmitting 45 Mb/s at 850 nm over spans of 10 km, connecting local telephone central offices. Deployment began slowly in the late 1970s, and soon a second fiber window opened at 1300nm, allowing a doubling of speed and transmission distance. In 1980, AT&T announced plans to extend multimode fiber into its long-haul network, by laying a 144-fiber cable between Boston and Washington with repeaters spaced every 7 km along an existing right of way. Yet by then change was accelerating in the no-longer stodgy telecommunications industry. Two crucial choices in system design and the breakup of AT&T were about to launch the modern fiber-optic communications industry. In 1980, Bell Labs announced that the next generation of transoceanic telephone cables would use single-mode fiber instead of the copper coaxial cables used since the first transatlantic phone cable in 1956. In 1982, the upstart MCI Communications picked single-mode fiber as the backbone of its new North American longdistance phone network, replacing the microwave towers that gave the company its original name, Microwave Communications Inc. That same year, AT&T agreed to divest its seven regional telephone companies to focus on long-distance service, computing, and communications hardware. The submarine fiber decision was a bold bet on a new technology based on desperation. Regulators had barred AT&T from operating communication satellites since the mid-1960s. Coax had reached its practical limit for intercontinental cables. Only single-mode fiber transmitting at 1310 nm could transmit 280 Mb/s through 50-km spans stretching more than 6000 km across the Atlantic. AT&T and its partners British Telecom and France Telecom set a target of 1988 for installing TAT-8, the first transatlantic fiber cable. More submarine fiber cables would follow. In 1982, MCI went looking for new technology to upgrade its long-distance phone network. Visits to British Telecom Research Labs and Japanese equipment makers convinced them that single-mode fiber transmitting 400 Mb/s at 1310 nm was ready for installation. AT&T and Sprint soon followed, with Sprint ads promoting the new fiber technology by claiming that callers could hear a pin drop over it. 
Fueled by the breakup of AT&T and intense competition for long-distance telephone service, fiber sales boomed as new long-haul networks were installed, then slumped briefly after their completion. The switch to single-mode fiber opened the room to further system improvements. By 1987, terrestrial long-distance backbone systems were carrying 800 Mb/s, and systems able to transmit 1.7 Gb/s were in development. Long-distance traffic increased as competition reduced long-distance rates, and developers pushed for the next transmission milestone of 2.5 Gb/s. Telecommunications was becoming an important part of the laser and optics market, pushing development of products including diode lasers, receivers, and optical connectors. Fiber optics had shifted the telephone industry into overdrive. Two more technological revolutions in their early stages in the late 1980s would soon shift telecommunications to warp speed. One came from the optical world, the fiber amplifier. The other came from telecommunications—the Internet. Even inthe late 1980s, the bulk of telecommunications traffic consisted of telephone conversations. (Cable television networks carried analog signals and were separate from the usual world of telecommunications.) Telephony was a mature industry, with traffic volume growing about 10% a year. Fiber traffic was increasing faster than that because fiber was displacing older technologies including microwave relays and geosynchronous communication satellites. Telecommunications networks also carried some digital data, but the overall volume was small. The ideas that laid the groundwork for the Internet date back to the late 1960s. Universities began installing terminals so students and faculty could access mainframe computers, ARPANET began operations to connect universities, and telephone companies envisioned linking home users to mainframes through telephone wiring. Special terminals were hooked to television screens for early home information services called videotex. But those data services attracted few customers, and data traffic remained limited until the spread of personal computers in the 1980s. The first personal computer modems sent 300 bits/s through phone lines, a number that soon rose to 1200 bits/s. Initially the Internet was limited to academic and government users, so other PC users accessed private networks such as CompuServe and America Online, but private Internet accounts became available by 1990. The World Wide Web was launched in 1991 at the European Center for Nuclear Research (CERN)and initially grew slowly. Butin 1994the numberofserverssoaredfrom500 to 10,000, and the data floodgates were loosed. Digital traffic soared. By good fortune, the global fiber-optic backbone network was already in place as data traffic started to soar. Construction expenses are a major part of network costs, so multi-fiber cables were laid that in the mid-1980s were thought to be adequate to support many years of normal traffic growth. That kept the “Information Superhighway” from becoming a global traffic jam as data traffic took off. The impact of fiber is evident in Fig. 1, a chart presented by Donald Keck during his 2011 CLEO plenary talk. Diverse new technologies had increased data transmission rates since 1850. Fiber optics became the dominant technology after 1980 and is responsible for the change in slope of the data-rate growth. Even morefortunately, Internet traffic was growing in phase with the development of a vital new optical technology, the optical fiber amplifier. 
Early efforts to develop all-optical amplifiers focused on semiconductor sources, because they could be easily matched to signal wavelengths, but experiments in the middle to late 1980s found high noise levels. Attention turned to fiber amplifiers after David Payne demonstrated the first erbium-doped fiber amplifier in 1987. (See Digonnet’s chapter on p. 195.) Elias Snitzer had demonstrated a neodymium-doped optical amplifier at American Optical in 1964, but it had not caught on because it required flashlamp pumping. Erbium was the right material at the right time. Its gain band fell in the 1550-nm window where optical fibers have minimum attenuation. Within a couple of years, British Telecom Labs had identified a diode-laser pump band at 980 nm and Snitzer, then at Polaroid, had found another at 1480 nm. By 1989, diode-pumped fiber amplifiers looked like good replacements for cumbersome electro-optic repeaters. What launched the bandwidth revolution was the ability of fiber amplifiers to handle wavelength division multiplexed signals. The first tests started with only a few wavelengths and a single amplifier; then developers added more wavelengths and additional amplifiers. The good news was that wavelength-division multiplexing (WDM) multiplied capacity by the number of channels that could be squeezed into the transmission band. The bad news was that WDM also multiplied the number of potential complications. |
You may only respond using the information provided in the context block. Bold any statute numbers. | Explain how 17 U.S.C. Section 112 affects radio stations in layman's terms | Who pays whom, as well as who can sue whom for copyright infringement, depends in part on
the mode of listening to music. Rights owners of sound recordings (e.g., record labels) pay music
publishers for the right to record and distribute the publishers’ musical works in a physical
format.27 Retail outlets that sell digital files or physical reproductions of sound recordings pay the
distribution subsidiaries of major record labels, which act as wholesalers.28 Purchasers of a
lawfully made reproduction, such as a compact disc or a digital file, may listen to that song as
often as they wish in a private setting.
Radio listeners have less control over when and where they listen to a song than they would if
they purchased the song outright. The Copyright Act does not require broadcast radio stations to
pay public performance royalties to record labels and artists, but it does require them to pay
public performance royalties to music publishers and songwriters for the use of notes and lyrics in
broadcast music.
In addition to mechanical rights, digital services must pay both record labels and music publishers
for public performance rights. Both traditional broadcast radio stations and music streaming
services that limit the ability of users to choose which songs they hear next (noninteractive
services) may make temporary reproductions of songs in the normal course of transmitting music
to listeners.29 The rights to make these temporary reproductions, known as “ephemeral
reproductions,” fall under 17 U.S.C. Section 112 (see “Ephemeral Reproductions”).
Users of an “on demand” or “interactive” music streaming service can listen to songs upon
request, an experience similar in some ways to playing a compact disc and in other ways to
listening to a radio broadcast. To enable multiple listeners to select songs, the services download
digital files to consumers’ devices. These digital reproductions are known as “conditional” or
“tethered” downloads, because consumers’ ability to listen to them upon request is conditioned
upon remaining subscribers to the interactive services.30 The services pay royalties to music
publishers or songwriters for the right to reproduce and distribute the musical works, and pay
royalties to record labels or artists for the right to reproduce and distribute sound recordings.31
Thus, while record labels reproduce and distribute their own sound recordings in physical form, they license such rights to interactive digital streaming services when the music is in digital
form. | System instructions: [You may only respond using the information provided in the context block. Bold any statute numbers. ]
Content block: [Who pays whom, as well as who can sue whom for copyright infringement, depends in part on
the mode of listening to music. Rights owners of sound recordings (e.g., record labels) pay music
publishers for the right to record and distribute the publishers’ musical works in a physical
format.27 Retail outlets that sell digital files or physical reproductions of sound recordings pay the
distribution subsidiaries of major record labels, which act as wholesalers.28 Purchasers of a
lawfully made reproduction, such as a compact disc or a digital file, may listen to that song as
often as they wish in a private setting.
Radio listeners have less control over when and where they listen to a song than they would if
they purchased the song outright. The Copyright Act does not require broadcast radio stations to
pay public performance royalties to record labels and artists, but it does require them to pay
public performance royalties to music publishers and songwriters for the use of notes and lyrics in
broadcast music.
In addition to mechanical rights, digital services must pay both record labels and music publishers
for public performance rights. Both traditional broadcast radio stations and music streaming
services that limit the ability of users to choose which songs they hear next (noninteractive
services) may make temporary reproductions of songs in the normal course of transmitting music
to listeners.29 The rights to make these temporary reproductions, known as “ephemeral
reproductions,” fall under 17 U.S.C. Section 112 (see “Ephemeral Reproductions”).
Users of an “on demand” or “interactive” music streaming service can listen to songs upon
request, an experience similar in some ways to playing a compact disc and in other ways to
listening to a radio broadcast. To enable multiple listeners to select songs, the services download
digital files to consumers’ devices. These digital reproductions are known as “conditional” or
“tethered” downloads, because consumers’ ability to listen to them upon request is conditioned
upon remaining subscribers to the interactive services.30 The services pay royalties to music
publishers or songwriters for the right to reproduce and distribute the musical works, and pay
royalties to record labels or artists for the right to reproduce and distribute sound recordings.31
Thus, while record labels reproduce and distribute their own sound recordings in physical form, they license such rights to interactive digital streaming services when the music is in digital
form.]
Question: [Explain how 17 U.S.C. Section 112 affects radio stations in layman's terms] |
Find the correct answer from the context, and respond in two sentences. Omit any filler. Respond only using information from the provided context. | Can you explain CAUTION? | Representative Michael McCaul introduced the Deterring America’s Technological Adversaries (DATA) Act (H.R. 1153, H.Rept. 118-63) on February 24, 2023. Among other provisions, the bill would require federal actions to protect the sensitive personal data of U.S. persons, with a particular focus on prohibiting the transfer of such data to foreign persons influenced by China. It would also require the Department of the Treasury to issue a directive prohibiting U.S. persons from engaging in any transaction with any person who knowingly provides or may transfer sensitive personal data subject to U.S. jurisdiction to any foreign person subject to Chinese influence. The bill was reported by the Committee on Foreign Affairs on May 16, 2023, and placed on the Union Calendar, Calendar No. 43, the same day.
Representative Kat Cammack introduced the Chinese-owned Applications Using the Information of Our Nation (CAUTION) Act of 2023 (H.R. 750) on February 2, 2023. The bill would require any person who sells or distributes the social media application TikTok (or any service developed or provided by ByteDance Ltd.) to disclose, prior to download, that the use of the application is prohibited on government-owned devices. The bill was ordered to be reported, amended, on March 9, 2023, by the House Committee on Energy and Commerce.
Representative Ken Buck introduced the No TikTok on United States Devices Act (H.R. 503) on January 25, 2023. Among other provisions, the bill would impose sanctions on the parent company of the TikTok social media service, ByteDance Ltd., as long as it is involved with TikTok. Specifically, the President would be required to impose property-blocking sanctions on ByteDance or any successor entity or subsidiary if it is involved in matters relating to (1) TikTok or any successor service; or (2) information, video, or data associated with such a service. Additionally, the bill would require the Office of the Director of National Intelligence (ODNI) to report to Congress on any national security threats posed by TikTok, including the ability of China’s government to access or use the data of U.S. users of TikTok. Within 180 days of the bill’s enactment, ODNI would be required to brief Congress on the implementation of the bill. On February 27, 2023, the bill was referred to the Subcommittee on the National Intelligence Enterprise.
Senator Josh Hawley introduced the No TikTok on United States Devices Act (S. 85) on January 25, 2023. The bill is substantially similar to H.R. 503. On January 25, 2023, the bill was referred to the Committee on Banking, Housing, and Urban Affairs. | Can you explain CAUTION?
Find the correct answer from the context, and respond in two sentences. Omit any filler. Respond only using information from the provided context.
Representative Michael McCaul introduced the Deterring America’s Technological Adversaries (DATA) Act (H.R. 1153, H.Rept. 118-63) on February 24, 2023. Among other provisions, the bill would require federal actions to protect the sensitive personal data of U.S. persons, with a particular focus on prohibiting the transfer of such data to foreign persons influenced by China. It would also require the Department of the Treasury to issue a directive prohibiting U.S. persons from engaging in any transaction with any person who knowingly provides or may transfer sensitive personal data subject to U.S. jurisdiction to any foreign person subject to Chinese influence. The bill was reported by the Committee on Foreign Affairs on May 16, 2023, and placed on the Union Calendar, Calendar No. 43, the same day.
Representative Kat Cammack introduced the Chinese-owned Applications Using the Information of Our Nation (CAUTION) Act of 2023 (H.R. 750) on February 2, 2023. The bill would require any person who sells or distributes the social media application TikTok (or any service developed or provided by ByteDance Ltd.) to disclose, prior to download, that the use of the application is prohibited on government-owned devices. The bill was ordered to be reported, amended, on March 9, 2023, by the House Committee on Energy and Commerce.
Representative Ken Buck introduced the No TikTok on United States Devices Act (H.R. 503) on January 25, 2023. Among other provisions, the bill would impose sanctions on the parent company of the TikTok social media service, ByteDance Ltd., as long as it is involved with TikTok. Specifically, the President would be required to impose property-blocking sanctions on ByteDance or any successor entity or subsidiary if it is involved in matters relating to (1) TikTok or any successor service; or (2) information, video, or data associated with such a service. Additionally, the bill would require the Office of the Director of National Intelligence (ODNI) to report to Congress on any national security threats posed by TikTok, including the ability of China’s government to access or use the data of U.S. users of TikTok. Within 180 days of the bill’s enactment, ODNI would be required to brief Congress on the implementation of the bill. On February 27, 2023, the bill was referred to the Subcommittee on the National Intelligence Enterprise.
Senator Josh Hawley introduced the No TikTok on United States Devices Act (S. 85) on January 25, 2023. The bill is substantially similar to H.R. 503. On January 25, 2023, the bill was referred to the Committee on Banking, Housing, and Urban Affairs. |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | what is monkeypox and is it currently spreading? what is the direct effects of monkeypox and how dangerous is it? are there ways to prevent monkey pox? | Mpox (formerly known as monkeypox) is a disease caused by infection with a virus, known as Monkeypox virus. This virus is part of the same family as the virus that causes smallpox. People with mpox often get a rash, along with other symptoms. The rash will go through several stages, including scabs, before healing. Mpox is not related to chickenpox.
Mpox is a zoonotic disease, meaning it can be spread between animals and people. It is endemic, or found regularly, in parts of Central and West Africa. The virus that causes mpox has been found in small rodents, monkeys, and other mammals that live in these areas.
Discovery and History
The virus that causes mpox was discovered in 1958, when two outbreaks of a pox-like disease occurred in colonies of monkeys kept for research. Despite being named “monkeypox” originally, the source of the disease remains unknown. Scientists suspect African rodents and non-human primates (like monkeys) might harbor the virus and infect people.
The first human case of mpox was recorded in 1970, in what is now the Democratic Republic of the Congo. In 2022, mpox spread around the world. Before that, cases of mpox in other places were rare and usually linked to travel or to animals being imported from regions where mpox is endemic.
The World Health Organization renamed the disease in 2022 to follow modern guidelines for naming illnesses. Those guidelines recommend that disease names should avoid offending cultural, social, national, regional, professional or ethnic groups and minimize unnecessary negative effects on trade, travel, tourism or animal welfare. The virus that causes it still has its historic name, however.
Virus Types
Map of countries known to be endemic for clade I & 2 Mpox
There are two types of the virus that causes mpox: clade I and clade II.
Clade I is responsible for the current rise of cases in Central and Eastern Africa. Historically, clade I caused higher numbers of severe illnesses than clade II, with up to 10% of people dying from it. Recent outbreaks have seen much lower death rates of about 1-3.3%.
Clade II is the type that caused the global outbreak that began in 2022. Infections from clade II mpox are generally less severe. More than 99.9% of people survive. Clade II is endemic to West Africa.
Both types of the virus can spread through:
Close contact (including intimate contact) with a person with mpox
Direct contact with contaminated materials
Direct contact with infected animals
Risk of Severe Disease
Although most cases of mpox are not life-threatening, some people may be more likely to get severely ill, including
People with severely weakened immune systems
Children younger than 1
People with a history of eczema
People who are pregnant
Preventing Mpox
There are several ways you can protect yourself and others from mpox:
Getting vaccinated. Check with your healthcare provider to find out if the mpox vaccine is recommended for you.
Avoiding close, skin-to-skin contact with people who have a rash that looks like mpox.
Avoiding contact with materials that a person with mpox has used, including sharing eating utensils and cups, and handling their bedding or clothing.
If you do get mpox, isolate at home, and cover lesions and wear a mask if you must be around others.
If you are in an area of Western or Central Africa where mpox occurs regularly, avoid contact with live or dead wild animals. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
what is monkeypox and is it currently spreading? what is the direct effects of monkeypox and how dangerous is it? are there ways to prevent monkey pox?
<TEXT>
Mpox (formerly known as monkeypox) is a disease caused by infection with a virus, known as Monkeypox virus. This virus is part of the same family as the virus that causes smallpox. People with mpox often get a rash, along with other symptoms. The rash will go through several stages, including scabs, before healing. Mpox is not related to chickenpox.
Mpox is a zoonotic disease, meaning it can be spread between animals and people. It is endemic, or found regularly, in parts of Central and West Africa. The virus that causes mpox has been found in small rodents, monkeys, and other mammals that live in these areas.
Discovery and History
The virus that causes mpox was discovered in 1958, when two outbreaks of a pox-like disease occurred in colonies of monkeys kept for research. Despite being named “monkeypox” originally, the source of the disease remains unknown. Scientists suspect African rodents and non-human primates (like monkeys) might harbor the virus and infect people.
The first human case of mpox was recorded in 1970, in what is now the Democratic Republic of the Congo. In 2022, mpox spread around the world. Before that, cases of mpox in other places were rare and usually linked to travel or to animals being imported from regions where mpox is endemic.
The World Health Organization renamed the disease in 2022 to follow modern guidelines for naming illnesses. Those guidelines recommend that disease names should avoid offending cultural, social, national, regional, professional or ethnic groups and minimize unnecessary negative effects on trade, travel, tourism or animal welfare. The virus that causes it still has its historic name, however.
Virus Types
Map of countries known to be endemic for clade I & 2 Mpox
There are two types of the virus that causes mpox: clade I and clade II.
Clade I is responsible for the current rise of cases in Central and Eastern Africa. Historically, clade I caused higher numbers of severe illnesses than clade II, with up to 10% of people dying from it. Recent outbreaks have seen much lower death rates of about 1-3.3%.
Clade II is the type that caused the global outbreak that began in 2022. Infections from clade II mpox are generally less severe. More than 99.9% of people survive. Clade II is endemic to West Africa.
Both types of the virus can spread through:
Close contact (including intimate contact) with a person with mpox
Direct contact with contaminated materials
Direct contact with infected animals
Risk of Severe Disease
Although most cases of mpox are not life-threatening, some people may be more likely to get severely ill, including:
People with severely weakened immune systems
Children younger than 1
People with a history of eczema
People who are pregnant
Preventing Mpox
There are several ways you can protect yourself and others from mpox:
Getting vaccinated. Check with your healthcare provider to find out if the mpox vaccine is recommended for you.
Avoiding close, skin-to-skin contact with people who have a rash that looks like mpox.
Avoiding contact with materials that a person with mpox has used, including sharing eating utensils and cups, and handling their bedding or clothing.
If you do get mpox, isolate at home, and cover lesions and wear a mask if you must be around others.
If you are in an area of Western or Central Africa where mpox occurs regularly, avoid contact with live or dead wild animals.
https://www.cdc.gov/poxvirus/mpox/about/index.html |
ONLY USE THE DATA I PROVIDE
Limit your response to 250 words
If you cannot answer using the contexts alone, say "I cannot determine the answer to that due to lack of context" | What does it mean when a nail polish is 10-free? | What is 7 Free Nail Polish? And Why is 10 Free Even Better!
September 27, 2021 By Mary Lennon
Did you know that the nail polish you are currently using could be affecting your health in ways you may have never even thought?
Mainstream nail polishes contain several chemicals and other harsh ingredients that have the ability to cause severe adverse reactions alongside damaging health concerns.
As a result of this, you may have come across nail polish brands that claim to be 3-free, 5-free, or even 7-free, and it is likely that you have wondered what exactly is meant by those confusing phrases.
In this blog post, we will be covering everything that you need to know about non-toxic nail polish, as well as offer an explanation as to why the nail polish you are using right now could be bad for your health.
**The Importance Of Non-Toxic Nail Polish**
First off, to truly understand the importance of non-toxic nail polish, it is essential to know the negative health effects associated with regular nail polish.
A 2015 study found that certain chemicals used in most bottles of nail polish can be absorbed into the body via the nails and cause several damaging health effects.
These damaging health effects vary greatly from person to person, but they can have quite a devastating impact. Regular nail polishes contain cancer-causing chemicals and, with overexposure, can cause hormone imbalances, thyroid issues, and an increased risk of diabetes.
Though you may assume that you are not going to be affected by these chemicals, just think of how much you breathe in the scent of nail lacquer while painting your nails and how often the paint touches your skin, as well as the amount of contact your nails have with your mouth directly, or when preparing food for both you and your loved ones.
Fortunately, enough research has accumulated that, while regular nail polish is sadly still promoted and used in salons, innovative and safe nail products have been developed with the health of the customer in mind.
Non-toxic nail polish comes in several variants, with each type excluding certain chemicals, ingredients, and products that may have highly damaging effects on the user. These types of non-toxic nail polishes are often referred to as 3-free, 5-free, 7-free, or 10-free.
Here at Côte Nail Polish, all of our vegan nail polishes are cruelty-free, and most importantly, non-toxic. Our customers can be assured that their health is our optimal concern and that we will always endeavor to use the best and highest quality ingredients for our nail polish and accessories.
**What Is 7-Free Nail Polish?**
7-free nail polish is free of 7 toxic chemicals that are common in most regular nail polish formulas. The chemicals this nail lacquer is free from are as follows: dibutyl phthalate, formaldehyde, toluene, formaldehyde resin, camphor, ethyl tosylamide, and xylene.
Each of these chemicals can have a negative impact on the user’s health, and therefore, substituting them out of the polish formula will help to lessen the toxic effect of nail varnish.
There are also other variants of the free toxin nail polish, each omitting a specific number of chemicals:
3-Free:
This formula is free of the “Toxic Trio”: Dibutyl Phthalate (DBP), Formaldehyde, and Toluene. These three chemicals are by far the most harmful, as they are associated with some of the most serious and debilitating diseases, including cancer and diabetes.
5-Free:
The chemicals excluded in 5-free polish include the above-mentioned “Toxic Trio” as well as formaldehyde resin and camphor.
10-Free:
This type of formula omits dibutyl phthalate, TPHP, toluene, xylene, ethyl tosylamide, camphor, formaldehyde, formaldehyde resin, parabens, and gluten. It is therefore considered one of the safest types of nail polish, because it leaves out these toxic ingredients.
**Chemicals Excluded in 7-Free Polish**
Simply providing the names of the seven harmful chemicals left out of these 7-free polishes does not fully convey the harm that these toxins can cause. In this section, we outline the chemicals that are excluded from 7-free nail polishes and explain the health issues they can cause.
Dibutyl Phthalate (DBP):
DBP is a harsh chemical that affects the endocrine system, which controls hormone regulation. Too much DBP can cause issues with your thyroid, leading to mental health problems as well as physical ones like fatigue. It can also hamper developmental growth and potentially affect reproductive health.
Formaldehyde:
This commonly used chemical is a known human carcinogen, meaning it has been known to increase the risk of cancer. It can also cause severe dermatitis, skin irritation, and eye irritation.
Toluene:
Toluene can cause birth defects in the children of pregnant women who are overly exposed to the chemical. It can also affect the nervous system, causing nausea, lightheadedness, and fatigue.
Formaldehyde Resin:
This chemical is a common allergen causing skin irritation, redness, and itching. It is used in nail lacquers to help solidify the liquid into a thicker texture.
Camphor:
Camphor can lead to poor nail health, which is the opposite of what you want when caring for and painting your nails. This is because the harsh toxins strip your nails of their essential nutrients, starving them of what they need to maintain their strength. This chemical can also cause disorientation, and more alarmingly, seizures.
Ethyl Tosylamide:
This chemical is banned in Europe due to potentially causing severe allergic reactions in users. The role of Ethyl Tosylamide is to help the polish to stick to the surface of the nail, though non-toxic nail polish also has great durability without the associated health risks.
Xylene:
This chemical is what gives your nail polish that distinct smell that often causes headaches or lightheadedness. It is used in nail polish to avoid clumpiness by thinning out the solution, but it is an incredibly toxic chemical that can cause immense reactions and irritation.
**Why 10-Free Nail Polish Is Even Better**
10-free nail polish is better than 7-free as it excludes additional chemicals and products that are not only toxic but also environmentally damaging, and even unethical to use.
Parabens:
Parabens are a group of preservatives used in polishes to extend their shelf life. Not only do parabens interfere with the hormonal system, but they have also been linked in well-researched studies to carcinogenic effects, including skin and breast cancer.
Triphenyl Phosphate:
Triphenyl phosphate is another harmful toxin often included in traditional nail polish to improve the malleability of the varnish. Ongoing, frequent, or extended exposure to triphenyl phosphate can cause changes in hormone regulation, affecting the reproductive system as well as metabolism. Consequently, it is best to avoid this harmful chemical, especially in your nail polish.
Gluten:
Gluten may seem surprising, but for those with a gluten allergy or intolerance, the inclusion of gluten in their nail polish can lead to severe adverse reactions. For those with celiac disease, topical exposure to wheat products can cause irritation.
**Conclusion**
In summary, regular nail polishes can be extremely harmful to not only the health of your nails but to your body as a whole! We believe that educating people on the toxic ingredients in nail polish can help customers to make more informed decisions regarding their health.
At Côte, we are dedicated to sharing our belief in beautiful, clean, and safe non-toxic nail polishes, which is why all our products are 10-free and cruelty-free, with a non-toxic formula and amazing vibrant colors. Additionally, all of our products are vegan and developed without any testing on lab animals.
| [System Instruction]
==================
ONLY USE THE DATA I PROVIDE
Limit your response to 250 words
If you cannot answer using the contexts alone, say "I cannot determine the answer to that due to lack of context"
----------------
[Passage]
==================
What is 7 Free Nail Polish? And Why is 10 Free Even Better!
September 27, 2021 By Mary Lennon
Did you know that the nail polish you are currently using could be affecting your health in ways you may have never even thought?
Mainstream nail polishes contain several chemicals and other harsh ingredients that have the ability to cause severe adverse reactions alongside damaging health concerns.
As a result of this, you may have come across nail polish brands that claim to be 3-free, 5-free, or even 7-free, and it is likely that you have wondered what exactly is meant by those confusing phrases.
In this blog post, we will be covering everything that you need to know about non-toxic nail polish, as well as offer an explanation as to why the nail polish you are using right now could be bad for your health.
**The Importance Of Non-Toxic Nail Polish**
First off, to truly understand the importance of non-toxic nail polish, it is essential to know the negative health effects associated with regular nail polish.
A 2015 study found that certain chemicals used in most bottles of nail polish can be absorbed into the body via the nails and cause several damaging health effects.
These damaging health effects vary greatly from person to person, but they can have quite a devastating impact. Regular nail polishes contain cancer-causing chemicals and, with overexposure, can cause hormone imbalances, thyroid issues, and an increased risk of diabetes.
Though you may assume that you are not going to be affected by these chemicals, just think of how much you breathe in the scent of nail lacquer while painting your nails and how often the paint touches your skin, as well as the amount of contact your nails have with your mouth directly, or when preparing food for both you and your loved ones.
Fortunately, enough research has accumulated that, while regular nail polish is sadly still promoted and used in salons, innovative and safe nail products have been developed with the health of the customer in mind.
Non-toxic nail polish comes in several variants, with each type excluding certain chemicals, ingredients, and products that may have highly damaging effects on the user. These types of non-toxic nail polishes are often referred to as 3-free, 5-free, 7-free, or 10-free.
Here at Côte Nail Polish, all of our vegan nail polishes are cruelty-free, and most importantly, non-toxic. Our customers can be assured that their health is our optimal concern and that we will always endeavor to use the best and highest quality ingredients for our nail polish and accessories.
**What Is 7-Free Nail Polish?**
7-free nail polish is free of 7 toxic chemicals that are common in most regular nail polish formulas. The chemicals this nail lacquer is free from are as follows: dibutyl phthalate, formaldehyde, toluene, formaldehyde resin, camphor, ethyl tosylamide, and xylene.
Each of these chemicals can have a negative impact on the user’s health, and therefore, substituting them out of the polish formula will help to lessen the toxic effect of nail varnish.
There are also other variants of the free toxin nail polish, each omitting a specific number of chemicals:
3-Free:
This formula is free of the “Toxic Trio”: Dibutyl Phthalate (DBP), Formaldehyde, and Toluene. These three chemicals are by far the most harmful, as they are associated with some of the most serious and debilitating diseases, including cancer and diabetes.
5-Free:
The chemicals excluded in 5-free polish include the above-mentioned “Toxic Trio” as well as formaldehyde resin and camphor.
10-Free:
This type of formula omits dibutyl phthalate, TPHP, toluene, xylene, ethyl tosylamide, camphor, formaldehyde, formaldehyde resin, parabens, and gluten. It is therefore considered one of the safest types of nail polish, because it leaves out these toxic ingredients.
**Chemicals Excluded in 7-Free Polish**
Simply providing the names of the seven harmful chemicals left out of these 7-free polishes does not fully convey the harm that these toxins can cause. In this section, we outline the chemicals that are excluded from 7-free nail polishes and explain the health issues they can cause.
Dibutyl Phthalate (DBP):
DBP is a harsh chemical that affects the endocrine system, which controls hormone regulation. Too much DBP can cause issues with your thyroid, leading to mental health problems as well as physical ones like fatigue. It can also hamper developmental growth and potentially affect reproductive health.
Formaldehyde:
This commonly used chemical is a known human carcinogen, meaning it has been known to increase the risk of cancer. It can also cause severe dermatitis, skin irritation, and eye irritation.
Toluene:
Toluene can cause birth defects in the children of pregnant women who are overly exposed to the chemical. It can also affect the nervous system, causing nausea, lightheadedness, and fatigue.
Formaldehyde Resin:
This chemical is a common allergen causing skin irritation, redness, and itching. It is used in nail lacquers to help solidify the liquid into a thicker texture.
Camphor:
Camphor can lead to poor nail health, which is the opposite of what you want when caring for and painting your nails. This is because the harsh toxins strip your nails of their essential nutrients, starving them of what they need to maintain their strength. This chemical can also cause disorientation, and more alarmingly, seizures.
Ethyl Tosylamide:
This chemical is banned in Europe due to potentially causing severe allergic reactions in users. The role of Ethyl Tosylamide is to help the polish to stick to the surface of the nail, though non-toxic nail polish also has great durability without the associated health risks.
Xylene:
This chemical is what gives your nail polish that distinct smell that often causes headaches or lightheadedness. It is used in nail polish to avoid clumpiness by thinning out the solution, but it is an incredibly toxic chemical that can cause immense reactions and irritation.
**Why 10-Free Nail Polish Is Even Better**
10-free nail polish is better than 7-free as it excludes additional chemicals and products that are not only toxic but also environmentally damaging, and even unethical to use.
Parabens:
Parabens are a group of preservatives used in polishes to extend their shelf life. Not only do parabens interfere with the hormonal system, but they have also been linked in well-researched studies to carcinogenic effects, including skin and breast cancer.
Triphenyl Phosphate:
Triphenyl phosphate is another harmful toxin often included in traditional nail polish to improve the malleability of the varnish. Ongoing, frequent, or extended exposure to triphenyl phosphate can cause changes in hormone regulation, affecting the reproductive system as well as metabolism. Consequently, it is best to avoid this harmful chemical, especially in your nail polish.
Gluten:
Gluten may seem surprising, but for those with a gluten allergy or intolerance, the inclusion of gluten in their nail polish can lead to severe adverse reactions. For those with celiac disease, topical exposure to wheat products can cause irritation.
**Conclusion**
In summary, regular nail polishes can be extremely harmful to not only the health of your nails but to your body as a whole! We believe that educating people on the toxic ingredients in nail polish can help customers to make more informed decisions regarding their health.
At Côte, we are dedicated to sharing our belief in beautiful, clean, and safe non-toxic nail polishes, which is why all our products are 10-free and cruelty-free, with a non-toxic formula and amazing vibrant colors. Additionally, all of our products are vegan and developed without any testing on lab animals.
----------------
[Question]
==================
What does it mean when a nail polish is 10-free? |
Use only the document provided.
If the question can not be answered then respond with 'I am unable to answer this request' | What are some ICT advances in the field of social touch? | REVIEW
published: 27 May 2015
doi: 10.3389/fdigh.2015.00002
Social touch in human–computer interaction
Jan B. F. van Erp 1,2* and Alexander Toet 1
1 Perceptual and Cognitive Systems, TNO, Soesterberg, Netherlands; 2 Human Media Interaction, University of Twente, Enschede, Netherlands
Edited by: Yoram Chisik, University of Madeira, Portugal
Reviewed by: Mohamed Chetouani, Université Pierre et Marie Curie, France; Gualtiero Volpe, Università degli Studi di Genova, Italy; Hongying Meng, Brunel University London, UK
*Correspondence: Jan B. F. van Erp, TNO Human Factors, Kampweg 5, Soesterberg 3769DE, Netherlands, [email protected]
Specialty section: This article was submitted to Human-Media Interaction, a section of the journal Frontiers in Digital Humanities
Received: 06 February 2015; Paper pending published: 19 March 2015; Accepted: 08 May 2015; Published: 27 May 2015
Citation: van Erp JBF and Toet A (2015) Social touch in human–computer interaction. Front. Digit. Humanit. 2:2. doi: 10.3389/fdigh.2015.00002
Touch is our primary non-verbal communication channel for conveying intimate emotions
and as such essential for our physical and emotional wellbeing. In our digital age, human
social interaction is often mediated. However, even though there is increasing evidence
that mediated touch affords affective communication, current communication systems
(such as videoconferencing) still do not support communication through the sense of
touch. As a result, mediated communication does not provide the intense affective
experience of co-located communication. The need for ICT mediated or generated
touch as an intuitive way of social communication is even further emphasized by the
growing interest in the use of touch-enabled agents and robots for healthcare, teaching,
and telepresence applications. Here, we review the important role of social touch in
our daily life and the available evidence that affective touch can be mediated reliably
between humans and between humans and digital agents. We base our observations
on evidence from psychology, computer science, sociology, and neuroscience with
focus on the first two. Our review shows that mediated affective touch can modulate
physiological responses, increase trust and affection, help to establish bonds between
humans and avatars or robots, and initiate pro-social behavior. We argue that ICT
mediated or generated social touch can (a) intensify the perceived social presence of
remote communication partners and (b) enable computer systems to more effectively
convey affective information. However, this research field on the crossroads of ICT and
psychology is still embryonic and we identify several topics that can help to mature the
field in the following areas: establishing an overarching theoretical framework, employing
better research methodologies, developing basic social touch building blocks, and solving
specific ICT challenges.
Keywords: affective touch, mediated touch, social touch, interpersonal touch, human–computer interaction,
human–robot interaction, haptic, tactile
Introduction
Affective Touch in Interpersonal Communication
The sense of touch is the earliest sense to develop in a human embryo (Gottlieb 1971)
and is critical for mammals’ early social development and to grow up healthily (Harlow and
Zimmermann 1959; Montagu 1972). The sense of touch is one of the first mediums of communication between newborns and parents. Interpersonal communication is to a large extent
non-verbal and one of the primary purposes of non-verbal behavior is to communicate emotional states. Non-verbal communication includes facial expressions, prosody, gesture, and touch
(Argyle 1975; Knapp and Hall 2010) of which touch is the primary
modality for conveying intimate emotions (Field 2010; Morrison
et al. 2010; App et al. 2011), for instance, in greetings, in corrections, and in (sexual) relationships. As touch implies direct
physical interaction and co-location, it inherently has the potential
to elicit feelings of social presence. The importance of touch as a
modality in social communication is highlighted by the fact that
the human skin has specific receptors to process affective touch
(“the skin as a social organ”: Morrison et al. 2010) in addition
to those for discriminative touch (Löken et al. 2009; Morrison
et al. 2011; Gordon et al. 2013; McGlone et al. 2014), presumably
like all mammals (Vrontou et al. 2013). ICT systems can employ
human touch for information processing (discriminative touch)
and communication (social touch) as well.
Field (2010) and Gallace and Spence (2010)]. For these reasons,
mediated interpersonal touch is our first topic of interest.
Human–computer interaction applications increasingly deploy
intelligent agents to support the social aspects of the interaction.
Social agents (either embodied or virtual) already employ vision
and audition to communicate social signals but generally lack
touch capabilities. If we look at applications in robots and avatars,
the first applications including touch facilitated information from
user to system only, e.g., in the form of a touch screen or through
specific touch sensors in a tangible interface. Social agents that can
touch the user are of much more recent date. We believe that social
agents could benefit from generating and perceiving social touch
cues (van Erp 2012). Based on studies reviewed in this paper, we
expect that people will feel a closer bond with agents or robots
that use and respond to affective touch since they appear more
human than machine-like and more trustworthy. Touch-enabled
social agents are therefore our second topic of interest.
Discriminative Touch in ICT Systems
Conventional systems for human–computer interaction only
occasionally employ the sense of touch and mainly provide information through vision and audition. One of the first large-scale
applications of a tactile display was the vibration function on
mobile phones, communicating the 1-bit message of an incoming
call, and the number of systems that include the sense of touch
has steadily increased over the past two decades. An important
reason for the sparse use of touch is the supposed low bandwidth of the touch channel (Gallace et al. 2012). Although often
underestimated, our touch sense is very well able to process large
amounts of abstract information. For instance, blind people who
are trained in Braille reading can actually read with their fingertips. This information processing capability is increasingly applied
in our interaction with systems, and more complex information
is being displayed, e.g., to reduce the risk of visual and auditory
overload in car driving, to make us feel more immersed in virtual
environments, or to realistically train and execute certain medical
skills (van Erp and van Veen 2004; Self et al. 2008).
Touch in Social Communication
Social touch can take many forms in our daily lifes such as greetings (shaking hands, embracing, kissing, backslapping, and cheektweaking), in intimate communication (holding hands, cuddling,
stroking, back scratching, massaging), and in corrections (punishment, spank on the bottom). Effects of social touch are apparent
at many levels ranging from physiology to social behavior as we
will discuss in the following sections.
Social touches can elicit a range of strong experiences between
pleasant and unpleasant, depending on among others the stimulus
[e.g., unpleasant pinches evoking pain (nociception)] and location
on the body (e.g., pleasant strokes in erogenous zones). In addition to touch in communication, touch can also be employed in
psychotherapy (Phelan 2009) and nursing (Gleeson and Timmins
2005). Examples range from basic comforting touches and massaging to alternative therapies such as acu-pressure, Reiki, vibroacoustic therapy, and low-frequency vibration (Wigram 1996;
Kvam 1997; Patrick 1999; Puhan et al. 2006; Prisby et al. 2008).
See Dijk et al. (2013) for more examples on mental, healthrelated, and bodily effects of touch. In this paper, we focus on
ICT mediated and generated social touch (the areas where psychology and computer science meet), meaning that areas of, for
instance, Reiki and low-frequency vibration fall outside the scope
of this paper. We first discuss the many roles of social touch in
our daily life before continuing with ICT mediated inter-human
touch and ICT generated and interpreted touch in human–agent
interaction.
In 1990s (Vallbo et al. 1993), the first reports on so-called C
tactile afferents in human hairy skin were published. This neurophysiological channel in the skin reacts to soft, stroking touches,
and its activity strongly depends on stroking speed (with an optimum in the speed range 3–10 cm/s) and has a high correlation
with subjective ratings of the pleasantness of the touch. Research
over the past decades has shown that this system is not involved
in discriminative touch (Olausson et al. 2008) but underlies the
emotional aspects of touch and the development and function of
the social brain (McGlone et al. 2014). Social touches may activate
both this pleasurable touch system and the discriminative touch
Affective Touch in ICT Systems
Incorporating the sense of touch in ICT systems started with
discriminative touch as an information channel, often in addition to vision and audition (touch for information processing).
We believe that we are on the averge of a second transition:
adding social or affective touch to ICT systems (touch for social
communication). In our digital era, an increasing amount of our
social interactions is mediated, for example, through (cell) phones,
video conferencing, text messaging, chat, or e-mail. Substituting
direct contact, these modern technologies make it easy to stay
in contact with distant friends and relatives, and they afford
some degree of affective communication. For instance, an audio
channel can transmit affective information through phonetic features like amplitude variation, pitch inflections, tempo, duration,
filtration, tonality, or rhythm, while a video channel supports nonverbal information such as facial expressions and body gestures.
However, current communication devices do not allow people to
express their emotions through touch and may therefore lack a
convincing experience of actual togetherness (social presence).
This technology-induced touch deprivation may even degrade
the potential beneficial effects of mediated social interaction [for
reviews of the negative side effects of touch deprivation see
system (reacting to, for instance, pressure, vibration, and skin
stretch).
caring, agreement, gratitude, and moral support. Cold feedback
was consistently associated with negative issues.
Touch, Physiological Functioning, and Wellbeing
Touch to Communicate Emotions
Hertenstein et al. (2006, 2009) showed that touch alone can effectively be used to convey distinct emotions such as anger, fear,
and disgust. In addition, touch plays a role in communicating
more complex social messages like trust, receptivity, affection
(Mehrabian 1972; Burgoon 1991) and nurture, dependence, and
affiliation (Argyle 1975). Touch can also enhance the meaning
of other forms of verbal and non-verbal communication, e.g.,
touch amplifies the intensity of emotional displays from our face
and voice (Knapp and Hall 2010). Examples of touches used to
communicate emotions are shaking, pushing, and squeezing to
communicate anger, hugging, patting, and stroking to communicate love (Gallace and Spence 2010). Jones and Yarbrough (1985)
stated that a handshake, an encouraging pat on the back, a sensual
caress, a nudge for attention, a tender kiss, or a gentle brush of the
shoulder can all convey a vitality and immediacy that is at times
far more powerful than language. According to App et al. (2011),
touch is the preferred non-verbal communication channel for
conveying intimate emotions like love and sympathy, confirmed
by, for instance, Debrot et al. (2013) who showed that responsive
touch between romantic partners enhances their affective state.
McCance and Otley (1951) showed that licking and stroking of the
mother animal is critical to start certain physiological processes in
a new-born mammal. This indicates the direct link between skin
stimulation and physiological processes, a link that is preserved
later in life. For instance, gentle stroking touch can lower heart
rate and blood pressure (Grewen et al. 2003), increase transient
sympathetic reflexes and increase pain thresholds (Drescher et al.
1980; Uvnäs-Moberg 1997), and affect the secretion of stress
hormones (Whitcher and Fisher 1979; Shermer 2004; Ditzen et al.
2007). Women holding their partner’s hand showed attenuated
threat-related brain activity in response to mild electric shocks
(Coan et al. 2006) and reported less pain in a cold pressor task
(Master et al. 2009). Touch can also result in coupling or syncing of
electrodermal activity of interacting (romantic) couples (ChatelGoldman et al. 2014). Interpersonal touch is the most commonly
used method of comforting (Dolin and Booth-Butterfield 1993)
and an instrument in nursing care (Bush 2001, Chang 2001,
Henricson et al. 2008). For example, patients who were touched
by a nurse during preoperative instructions experienced lower
subjective and objective stress levels (Whitcher and Fisher 1979),
than people who were not.
In addition to touch affecting hormone levels, hormones
(i.e., oxytocin) also affect the perception of interpersonal touch.
Scheele et al. (2014) investigated the effect of oxytocin on the
perception of a presumed male or female touch on male participants and found that oxytocin increased the rated pleasantness
and brain activity of presumed female touches but not of male
touches (all touches were delivered by the same female experimenter). Ellingsen et al. (2014) reported that after oxytocin submission, the effect of touch on the evaluation of facial expression
increased. In addition, touch (handshaking in particular) can also
play a role in social chemo-signaling. Handshaking can lead to
the exchange of chemicals in sweat and behavioral data indicates
that people more often sniff their hands after a greeting with
a handshake than without a handshake (Frumin et al. 2015).
Many social touches are reciprocal in nature (like cuddling and
holding hands) and their dynamics rely on different mechanisms
all having their own time scale: milliseconds for the detection of
a touch (discriminative touch), hundreds of milliseconds and up
for the experience of pleasurable touch, and seconds and up for
physiological responses (including changes in hormone levels).
How these processes interact and possibly reinforce each other is
still terra incognita.
Physiological responses can also be indirect, i.e., the result of
social or empathetic mechanisms. Cooper et al. (2014) recently
showed that the body temperature of people decreased when
looking at a video of other people putting their hands in cold
water. Another recent paradigm is to use thermal and haptically
enhanced interpersonal speech communication. This showed that
warm and cold signals were used to communicate the valence
of messages (IJzerman and Semin 2009; Suhonen et al. 2012a).
Warm messages were used to emphasize positive feelings and
pleasant experiences, and to express empathy, comfort, closeness,
Touch to Elicit Emotions
Not only can the sense of touch be used to communicate distinct
emotions but also to elicit (Suk et al. 2009) and modulate human
emotion. Please note that interpreting communicated emotions
differs from eliciting emotions as the former may be considered
as a cognitive task not resulting in physiological responses, e.g.,
one can perceive a touch as communicating anger without feeling
angry. Starting with the James–Lange theory (James 1884; Cannon
1927; Damasio 1999), the conscious experience of emotion is
the brain’s interpretation of physiological states. The existence of
specific neurophysiological channels for affective touch and pain
and the direct physiological reactions to touch indicate that there
may be a direct link between tactile stimulation, physiological
responses, and emotional experiences. Together with the distinct
somatotopic mapping between bodily tactile sensations and different emotional feelings as found by Nummenmaa et al. (2013),
one may assume that tactile stimulation of different bodily regions
can elicit a wide range of emotions.
Touch as a Behavior Modulator
In addition to communicating and eliciting emotions, touch provides an effective means of influencing people’s attitudes toward
persons, places, or services, their tendency to create bonds and
their (pro-)social behaviors [see Gallace and Spence (2010) for
an excellent overview]. This effect is referred to as the Midas
touch: a brief, casual touch (often at the hand or arm) that is
not necessarily consciously perceived named after king Midas
from Greek mythology who had the ability to turn everything he
touched into gold. For example, a half-second of hand-to-hand
touch from a librarian fostered more favorable impressions of the
library (Fisher et al. 1976), touching by a salesperson increased
positive evaluations of the store (Hornik 1992), and touch can
3
May 2015 | Volume 2 | Article 2
van Erp and Toet
Social touch in HCI
also boost the attractiveness ratings of the toucher (Burgoon et al.
1992). Recipients of such “simple” Midas touches are also more
likely to be more compliant or unselfish: willing to participate in
a survey (Guéguen 2002) or to adhere to medication (Guéguen
et al. 2010), volunteering for demonstrating in a course (Guéguen
2004), returning money left in a public phone (Kleinke 1977),
spending more money in a shop (Hornik 1992), tipping more in
a restaurant (Crusco and Wetzel 1984), helping with picking-up
dropped items (Guéguen and Fischer-Lokou 2003), or giving away
a cigarette (Joule and Guéguen 2007). In addition to these oneon-one examples, touch also plays a role in teams. For instance,
physical touch enhances team performance of basketball players
through building cooperation (Kraus et al. 2010). In clinical and
professional situations, interpersonal touch can increase information flow and causes people to evaluate communication partners
more favorably (Fisher et al. 1976).
and Watts 2010; Tsetserukou 2010), pokes (Park et al. 2011),
handholding (Gooch and Watts 2012; Toet et al. 2013), handshakes (Bailenson et al. 2007), strokes on the hand (Eichhorn et al.
2008), arm (Huisman et al. 2013) and cheek (Park et al. 2012),
pinches, tickles (Furukawa et al. 2012), pats (Bonanni et al. 2006),
squeezes (Rantala et al. 2013), thermal signals (Gooch and Watts
2010; Suhonen et al. 2012a,b), massages (Chung et al. 2009), and
intimate sexual touches (Solon 2015).
In addition to direct mediation, there is also an option to use
indirect ways, for instance, through avatars in a virtual world.
Devices like a haptic-jacket system can enhance the communication between users of virtual worlds such as Second Life by
enabling the exchange of touch cues resembling encouraging pats
and comforting hugs between users and their respective avatars
(Hossain et al. 2011). The Huggable is a semi-autonomous robotic
teddy bear equipped with somatic sensors, intended to facilitate
affective haptic communication between two people (Lee et al.
2009) through a tangible rather than a virtual interface. Using
these systems, people can not only exchange messages but also
emotionally and physically feel the social presence of the communication partner (Tsetserukou and Neviarouskaya 2010).
The above examples can be considered demonstrations of the
potential devices and applications and the richness of social touch.
Although it appears that virtual interfaces can effectively transmit
emotion even with touch cues that are extremely degraded (e.g., a
handshake that is lacking grip, temperature, dryness, and texture:
Bailenson et al. 2007), the field lacks rigorous validation and systematic exploration of the critical parameters. The few exceptions
are the work by Smith and MacLean (2007) and by Salminen
et al. (2008). Smith and MacLean performed an extensive study
into the possibilities and the design space of an interpersonal
haptic link and concluded that emotion can indeed be communicated through this medium. Salminen et al. (2008) developed a
friction-based horizontally rotating fingertip stimulator to investigate emotional experiences and behavioral responses to haptic
stimulation and showed that people can rate these kind of stimuli
as less or more unpleasant, arousing, avoidable, and dominating.
Mediated Social Touch
In the previous section, we showed that people communicate
emotions through touch, and that inter-human touch can enhance
wellbeing and modulate behavior. In interpersonal communication, we may use touch more frequently than we are aware of.
Currently, interpersonal communication is often mediated and
given the inherent human need for affective communication,
mediated social interaction should preferably afford the same
affective characteristics as face-to-face communication. However,
despite the social richness of touch and its vital role in human
social interaction, existing communication media still rely on
vision and audition and do not support haptic interaction. For
a more in-depth reflection on the general effects of mediated
interpersonal communication, we refer to Konijn et al. (2008) and
Ledbetter (2014).
Tactile or kinesthetic interfaces in principle enable haptic communication between people who are physically apart, and may
thus provide mediated social touch, with all the physical, emotional, and intellectual feedback it supplies (Cranny-Francis 2011).
Recent experiments show that even simple forms of mediated
touch have the ability to elicit a wide range of distinct affective
feelings (Tsalamlal et al. 2014). This finding has stimulated the
study and design of devices and systems that can communicate,
elicit, enhance, or influence the emotional state of a human by
means of mediated touch.
Remote Collaboration Between Groups
Collaborative virtual environments are increasingly used for distance education [e.g., Mikropoulos and Natsis (2011)], training
simulations [e.g., Dev et al. (2007) and Flowers and Aggarwal
(2014)], therapy treatments (Bohil et al. 2011), and for social
interaction venues (McCall and Blascovich 2009). It has been
shown that adding haptic feedback to the interaction between
users of these environments significantly increases their perceived
social presence (Basdogan et al. 2000; Sallnäs 2010).
Another recent development is telepresence robots that enable
users to physically interact with geographically remote persons
and environments. Their ultimate goal is to provide users with
the illusion of a physical presence in remote places. Telepresence
robots combine physical and remote presence and have a wide
range of potential social applications like remote embodied teleconferencing and teaching, visiting or monitoring elderly in care
centers, and making patient rounds in medical facilities (Kristoffersson et al. 2013). To achieve an illusion of telepresence, the
robot should be able to reciprocate the user’s behavior and to
Remote Communication Between Partners
Intimacy is of central importance in creating and maintaining
strong emotional bonds. Humans have an important social and
personal need to feel connected in order to maintain their interpersonal relationships (Kjeldskov et al. 2004). A large part of their
interpersonal communication is emotional rather than factual
(Kjeldskov et al. 2004).
The vibration function on a mobile phone has been used to
render emotional information for blind users (Réhman and Liu
2010) and a similar interface can convey emotional content in
instant messaging (Shin et al. 2007). Also, a wide range of systems
have been developed for the mediated representation of specific
touch events between dyads such as kisses (Saadatian et al. 2014),
hugs (Mueller et al. 2005; Cha et al. 2008; Teh et al. 2008; Gooch
4
provide the user with real-time multisensory feedback. As far as
we are aware of, systems including the sense of touch have not been
described yet.
provided by the user to the system, and closing the loop between
these signals.
Generating Social Touch Signals
Lemmens et al. (2009) tested tactile jackets (and later blankets)
to increase emotional experiences while watching movies and
reported quite strong effects of well-designed vibration patterns.
Dijk et al. (2013) developed a dance vest for deaf teenagers. This
vest included an algorithm that translated music into vibration
patterns presented through the vest. Although not generated by a
social entity, experiencing music has a substantial emotional part
as did the automatically generated vibration patterns.
Beyond the scripted and one-way social touch cues employed
in the examples above, human–computer interaction applications
increasingly deploy intelligent agents to support the social aspects
of the interaction (Nijholt 2014). Social agents are used to communicate, express, and perceive emotions, maintain social relationships, interpret natural cues, and develop social competencies
(Fong et al. 2003; Li et al. 2011). Empathic communication in
general may serve to establish and improve affective relations with
social agents (Bickmore and Picard 2005), and may be considered
as a fundamental requirement for social agents that are designed to
function as social companions and therapists (Breazeal 2011). Initial studies have shown that human interaction with social robots
can indeed have therapeutic value (Kanamori et al. 2003; Wada
and Shibata 2007; Robinson et al. 2013). These agents typically use
facial expressions, gesture, and speech to convey affective cues to
the user. Social agents (either physically embodied as, e.g., robots
or represented as on-screen virtual agents) may also use (mediated) touch technology to communicate with humans (Huisman
et al. 2014a). In this case, the touch cue is not only mediated but
also generated and interpreted by an electronic system instead of
a human.
The physical embodiment of robots gives them a direct capability to touch users, while avatars may use the technology designed
for other HCI or mediated social touch applications to virtually
touch their user. Several devices have been proposed that enable
haptic interaction with virtual characters (Hossain et al. 2011;
Rahman and El Saddik 2011; Huisman et al. 2014a). Only few
studies investigated autonomous systems that touch users for
affective or therapeutic purposes (Chen et al. 2011), or that use
touch to communicate the affective state of artificial creatures to
their users (Yohanan and MacLean 2012).
Reactions to Mediated Touch at a Physiological,
Behavioral, and Social Level
Although the field generally lacks serious validation studies, there
is mounting evidence that people use, experience, and react to
direct and mediated social touch in similar ways Bailenson and Yee
(2007), at the physiological, psychological, behavioral, and social
level.
At a physiological and psychological level, mediated affective
touch on the forearm can reduce heart rate of participants that
experienced a sad event (Cabibihan et al. 2012). Mediated touch
affects the quality of a shared experience and increases the intimacy felt toward the other person (Takahashi et al. 2011). Stimulation of someone’s hand through mediated touch can modulate
the quality of a remotely shared experience (e.g., the hilariousness
of a movie) and increase sympathy for the communication partner
(Takahashi et al. 2011). In a storytelling paradigm, participants
experienced a significantly higher degree of connectedness with
the storyteller when the speech was accompanied by remotely
administered squeezes in the upper arm (Wang et al. 2012).
Additional evidence for the potential effects of mediated touch
are found in the fact that hugging a robot medium while talking
increases affective feelings and attraction toward a conversation
partner (Kuwamura et al. 2013; Nakanishi et al. 2013). Participants receiving tactile facial stimulation experienced a stranger
receiving similar stimulation to be closer, more positive and more
similar to themselves when they were provided with synchronous
visual feedback (Paladino et al. 2010).
At a behavioral level, the most important observation is that
the effect of a mediated touch on people’s pro-social behavior is
similar to that of a real touch. According to Haans and IJsselsteijn
(2009a), a virtual Midas touch has effects in the same order of
magnitude as a real Midas touch. At the social level, the use
of mediated touch is only considered appropriate as a means of
communication between people in close personal relationships
(Rantala et al. 2013), and the mere fact that two people are willing
to touch implies an element of trust and mutual understanding
(Collier 1985). The interpretation of mediated touch depends on
the type of interrelationship between sender and receiver (Rantala
et al. 2013), similar to direct touch (Coan et al. 2006; Thompson
and Hampton 2011) and like direct touch, mediated touch communication between strangers can cause discomfort (Smith and
MacLean 2007).
Recognizing and Interpreting Social Touch
Signals
Communication implies a two-way interaction and social robots
and avatars should therefore not only be able to generate but
also to recognize affectionate touches. For instance, robotic affective responses to touch may contribute to people’s quality of life
(Cooney et al. 2014). Touch capability is not only “nice to have”
but may even be a necessity: people expect social interaction with
embodied social agents to the extent that physical embodiment
without tactile interaction results in a negative appraisal of the
robot (Lee et al. 2006). In a recent study on the suitability of social
robots for the wellbeing of the elderly, all participants expressed
their wish for the robot to feel pleasant to hold or stroke and to
Social Touch Generated by ICT Systems
The previous chapter dealt with devices that enable interpersonal
social touch communication, i.e., a situation in which the touch
signals are generated and interpreted by human users and only
mediated through information and communication technology.
One step beyond this is to include social touch in the communication between a user and a virtual entity. This implies
three additional challenges: the generation of social touch signals
from system to user, the interpretation of social touch signals
5
respond to touch (Hutson et al. 2011). The well-known example of
the pet seal Paro (Wada et al. 2010) shows how powerful a simple
device can be in evoking social touches. Paro responds sec to being
touched but does neither interpret social touch nor produce touch.
Similar effects are reported for touching a humanoid robot on the
shoulder: just being able to touch already significantly increases
trust toward the robot (Dougherty and Scharfe 2011).
Automatic recognition and interpretation of the affective content of human originated social touch is essential to support this
interaction (Argall and Billard 2010). Different approaches to
equipping robots with a sense of touch include covering them
with an artificial skin that simulates the human somatosensory
systems (Dahiya et al. 2010) or the use of fully embodied robots
covered with a range of different (e.g., temperature, proximity,
pressure) sensors (Stiehl et al. 2005). To fully capture a social
touch requires sensors that go beyond those used in the more
advanced area of haptics and that primarily involve discriminative
touch (e.g., contact, pressure, resistance). At least sensors for temperature and soft, stroking touch should be included to capture
important parameters of social touch. However, just equipping
a system (robot, avatar, or interface) with touch sensors is not
sufficient to enable affective haptic interaction. A system can
only appreciate and respond to affective touch in a natural way
when it is able (a) to determine where the touch was applied,
(b) to assess what kind of tactile stimulation was applied, and
(c) to appraise the affective quality of the touch (Nguyen et al.
2007). While video- and audio-based affect recognition have been
widely investigated (Calvo and D’Mello 2010), there have only
been a few studies on touch-based affect recognition. The results
of these preliminary studies indicate that affect recognition based
on tactile interaction between humans and robots is comparable
to that between humans (Naya et al. 1999; Cooney et al. 2012;
Altun and MacLean 2014; Jung et al. 2014; van Wingerden et al.
2014).
Research on capturing emotions from touch input to a computer system (i.e., not in a social context) confirms the potential
of the touch modality (Zacharatos et al. 2014). Several research
groups worked on capturing emotions from traditional computer
input devices like mouse and keyboard based on the assumption
that a user’s emotional state affects the motor output system. A
general finding is that typing speed correlates to valence with
a decrease in typing speed for negative valence and increased
speed for positive valence compared to typing speed in neutral
emotional state (Tsihrintzis et al. 2008; Khanna and Sasikumar
2010). A more informative system includes the force pattern of
the key strokes. Using this information, very high-accuracy rates
(>90%) are reported (Lv et al. 2008) for categorizing six emotional
states (neutral, anger, fear, happiness, sadness, and surprise).
This technique requires force sensitive keyboards, which are not
widely available. Touch screens are used by an increasing number of people and offer much richer interaction parameters than
keystrokes such as scrolling, tapping, or stroking. Recent work by
Gao et al. (2012) showed that in a particular game played on the
iPod, touch inputs like stroke length, pressure, and speed were
important features related to a participant’s verbal description of
the emotional experience during the game. Using a linear SVM,
classification performance reached 77% for four emotional classes
(excited, relaxed, frustrated, and bored), close to 90% for two levels
of arousal, and close to 85% for two levels of valence.
Closing the Loop
A robot that has the ability to “feel,” “understand,” and “respond”
to touch in a human-like way will be capable of more intuitive and
meaningful interaction with humans. Currently, artificial entities
that include touch capabilities either produce or interpret social
touch, but not both. However, both are required to close the loop
and come to real, bidirectional interaction. The latter may require
strict adherence to, for instance, timing and immediacy; a handshake in which the partners are out-of-phase can be very awkward.
And as Cranny-Francis (2011) states, violating the tactile regime
may result in being rejected as alien and may seriously offend
others.
Reactions to Touching Robots and Avatars at a
Physiological, Behavioral, and Social Level
Although there are still very few studies in this field, and there has
been hardly any real formal evaluation, the first results of touch
interactions with artificial entities appear promising. For instance,
people experience robots that interact by touch as less machinelike (Cramer et al. 2009). Yohanan and colleagues (Yohanan et al.
2005; Yohanan and MacLean 2012) designed several haptic creatures to study a robot’s communication of emotional state and
concluded that participants experienced a broader range of affect
when haptic renderings were applied. Basori et al. (2009) showed
the feasibility of using vibration in combination with sound and
facial expression in avatars to communicate emotion strength.
Touch also assists in building a relationship with social actors:
hand squeezes (delivered through an airbladder) can improve the
relation with a virtual agent (Bickmore et al. 2010). Artificial
hands equipped with synthetic skins can potentially replicate not
only the biomechanical behavior but also the warmth (the “feel”)
of the human hand (Cabibihan et al. 2009, 2010, 2011). Users
perceived a higher degree of friendship and social presence when
interacting with a zoomorphic social robot with a warmer skin
(Park and Lee 2014). Recent experiments indicate that the warmth
of a robotic hand mediating social touch contributed significantly
to the feeling of social presence (Nakanishi et al. 2014) and holding
a warm robot hand increased feelings of friendship and trust
toward a robot (Nie et al. 2012).
Kotranza and colleagues (Kotranza and Lok 2008; Kotranza
et al. 2009) describe a virtual patient as a medical student’s training
tool that is able to be touched and to touch back. These touchenabled virtual patients were treated more like real humans than
virtual patients without touch capabilities (students expressed
more empathy and used touch more frequently to comfort and
reassure the virtual patient).The authors concluded that by adding
haptic interaction to the virtual patient, the bandwidth of the
student-virtual patient communication increases and approaches
that of human–human communication. In a study on the interaction between toddlers and a small humanoid robot, Tanaka
et al. (2007) found that social connectedness correlated with the
amount of touch between the child and robot. In a study where
participants were asked to brush off “dirt” from either virtual
objects or virtual humans, they touched virtual humans with
6
less force than non-human objects, and they touched the face
of a virtual human with less force than the torso, while male
virtual humans were touched with more force than female virtual humans (Bailenson and Yee 2008). Huisman et al. (2014b)
performed a study in which participants played a collaborative
augmented reality game together with two virtual agents, visible in
the same augmented reality space. During interaction, one of the
virtual agents touched the user on the arm by means of a vibrotactile display. They found that the touching virtual agent was
rated higher on affective adjectives than the non-touching agent.
Finally, Nakagawa et al. (2011) created a situation in which a robot
requested participants to perform a repetitive monotonous task.
This request was accompanied by an active touch, a passive touch,
or no touch. The result showed that the active touch increased
people’s motivation to continue performing the monotonous task.
This confirms the earlier finding of Haans and IJsselsteijn (2009a)
that the effect of the virtual Midas touch is in the same order of
magnitude as the real Midas touch effect.
Research Topics
Mediated social touch is a relatively young field of research
that has the potential to substantially enrich human–human and
human–system interaction. Although it is still not clear to what
extent mediated touch can reproduce real touch, converging evidence seems to show that mediated touch shares important effects
with real touch. However, many studies have an anecdotal character without solid and/or generalizable conclusions and the key
studies in this field have not been replicated yet. This does not necessarily mean that the results are erroneous but it indicates that the
field has not matured enough and may suffer from a publication
bias. We believe that we need advancements in the following four
areas for the field to mature: building an overarching framework,
developing social touch basic building blocks, improving current
research methodologies, and solving specific ICT challenges.
Framework
The human skin in itself is a complex organ able to process many
different stimulus dimensions such as pressure, vibration, stretch,
and temperature (van Erp 2007). “Social touch” is what the brain
makes of these stimulus characteristics (sensations) taking into
account personality, previous experiences, social conventions, the
context, the object or person providing the touch, and probably
many more factors. The scientific domains involved in social
touch each have interesting research questions and answering
them helps the understanding of (real life or mediated) social
touch. In addition, we need an overarching framework to link
the results across disciplines, to foster multidisciplinary research,
and to encourage the transition from exploratory research to
hypothesis driven research.
Social and Cultural
Social touch has a strong (unwritten) etiquette (Cranny-Francis
2011). Important questions are how to develop a touch etiquette
for mediated touch and for social agents that can touch (van
Erp and Toet 2013), and how to incorporate social, cultural, and
individual differences with respect to acceptance and meaning
of a mediated or social agent’s touch. Individual differences may
include gender, attitude toward robots, and technology and touch
receptivity [the (dis-)liking of being touched, Bickmore et al.
2010]. An initial set of guidelines for this etiquette is given by
van Erp and Toet (2013). In addition, we should consider possible
ethical implications of the technology, ranging from affecting
people’s behavior without them being aware of it to the threat of
physical abuse “at a distance.”
Neuroscience
The recent finding that there exists a distinct somatotopic mapping between tactile sensations and different emotional feelings
(Nummenmaa et al. 2013; Walker and McGlone 2015) suggests
that it may also be of interest to determine a map of our responsiveness to interpersonal (mediated) touch across the skin surface (Gallace and Spence 2010). The availability of such a map may stimulate the further development of mediated social touch devices. Another research topic is the presumed close link between social touch and emotions and the potential underlying neurophysiological mechanisms, i.e., the connection between social touch and the emotional brain.
Multisensory and Contextual Cues
The meaning and appreciation of touch critically depend on its context (Collier 1985; Camps et al. 2012), such as the relation between conversation partners (Burgoon et al. 1992; Thompson and Hampton 2011), the body location of the touch (Nguyen et al. 1975), and the communication partner's culture (McDaniel and Andersen 1998). There is no one-to-one correspondence between a touch and its meaning (Jones and Yarbrough 1985). Hence, the touch channel should be coupled with other sensory channels to clarify its meaning (Wang and Quek 2010). An important research question is which multisensory and contextual cues are critical. Direct (i.e., unmediated) touch is usually a multisensory experience: during interpersonal touch, we typically experience not only tactile stimulation but also changes in warmth along with verbal and non-verbal visual, auditory, and olfactory signals. Non-verbal cues (when people see, hear, feel, and possibly smell their interaction partner performing the touching) may render mediated haptic technology more transparent, thereby increasing perceived social presence and enhancing the convincingness or immediacy of social touch (Haans and IJsselsteijn 2009b, 2010). Also, since the sight of touch activates brain regions involved in somatosensory processing [Rolls (2010); even watching a videotaped version: Walker and McGlone (2015)], the addition of visual feedback may enhance the associated haptic experience. Another strong cue for physical presence is body warmth. In human social interaction, physical temperature also plays an important role in sending interpersonal warmth (trust) information. Thermal stimuli may therefore serve as a proxy for social presence and stimulate the establishment of social relationships (IJzerman and Semin 2010).
In addition to these bottom-up, stimulus-driven aspects, top-down factors like expectations/beliefs of the receiver should be accounted for (e.g., beliefs about the intent of the interaction partner, familiarity with the partner, affordances of a physically embodied agent, etc.) since they shape the perceived meaning of touch (Burgoon and Walther 1990; Gallace and Spence 2010; Suhonen et al. 2012b).
Social Touch Building Blocks
Gallace and Spence (2010) noted that even the most advanced
devices will not be able to deliver something that can approximate
realistic interpersonal touch if we do not know exactly what needs
to be communicated and how to communicate it. Our touch capabilities are very complex, and like mediated vision and audition,
mediated touch will always be degraded compared to real touch.
The question is how this degradation influences the intended effects.
A priori, mediated haptic communication should closely resemble non-mediated communication in order to be intuitively processed without introducing ambiguity or increasing the cognitive
load (Rantala et al. 2011). However, the results discussed in this
paper [e.g., Bailenson et al. (2007), Smith and MacLean (2007),
Haans and IJsselsteijn (2009a), Giannopoulos et al. (2011), and
Rantala et al. (2013)] indicate that social touch is quite robust to
degradations and it may not be necessary to mediate all physical
parameters accurately or at all.
However, it is currently not even clear how we can haptically
represent valence and arousal, let alone which parameters of the rich and complex touch characteristics are crucial to the intended effects. Ideally, we
have a set of building blocks of social touch that can be applied
and combined depending on the situation.
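To make the notion of such building blocks concrete, the following minimal Python sketch maps a (valence, arousal) pair onto vibrotactile drive parameters. It is purely illustrative: the parameter ranges and the linear mappings are assumptions made for this example, not empirically validated encodings of affect.

def affect_to_vibration(valence, arousal):
    """Map valence and arousal (both in [-1, 1]) to vibrotactile parameters."""
    # Assumption: higher arousal -> higher frequency and stronger drive.
    frequency_hz = 80 + 170 * (arousal + 1) / 2   # 80-250 Hz
    amplitude = 0.2 + 0.8 * (arousal + 1) / 2     # normalized drive level
    # Assumption: positive valence -> longer, smoother pulses with shorter gaps.
    pulse_ms = 300 + 400 * (valence + 1) / 2      # 300-700 ms
    gap_ms = 400 - 300 * (valence + 1) / 2        # 100-400 ms between pulses
    return {"frequency_hz": round(frequency_hz), "amplitude": round(amplitude, 2),
            "pulse_ms": round(pulse_ms), "gap_ms": round(gap_ms)}

print(affect_to_vibration(valence=0.8, arousal=-0.5))  # calm, pleasant stroke-like pattern
print(affect_to_vibration(valence=-0.7, arousal=0.9))  # agitated, unpleasant poke-like pattern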
Effect Measures
Social touch can evoke effects at many different levels in the
receiver: physiological, psychological, behavioral, and social, and
it is likely that effects at these different levels also interact. For
instance, (social) presence and emotions can reciprocally reinforce each other. Currently, a broad range of effect measures is
applied, which makes it difficult to compare results, assess interactions between levels, and combine experimental results into
an integrated perspective. This argues for establishing a uniform set
of validated and standardized measures that covers the different
levels and that is robust and sensitive to the hypothesized effects of
social touch. This set could include basic physiological measures
known to vary with emotional experience [e.g., heart rate variability and skin conductance; Hogervorst et al. 2014]; psychological
and social measures reflecting trust, proximity, togetherness, and
social presence (IJsselsteijn et al. 2003; Van Bel et al. 2008; van Bel
et al. 2009), and behavioral measures, e.g., quantifying compliance
and performance. Please note though that each set of measures
will have its own pitfalls. For instance, see Brouwer et al. (2015)
for a critical reflection on the use of neurophysiological measures
to assess cognitive or mental state, and Bailenson and Yee (2008)
on the use of self-report questionnaires.
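As a purely illustrative example of one physiological measure from such a standardized set, the sketch below computes RMSSD, a common time-domain heart rate variability statistic, from inter-beat intervals; the interval values are invented solely for the example.

import math

def rmssd(ibi_ms):
    """Root mean square of successive differences between inter-beat intervals (ms)."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

baseline = [812, 798, 825, 840, 805, 818, 830]      # invented pre-touch intervals
touch_phase = [790, 802, 815, 798, 808, 795, 801]   # invented intervals during mediated touch
print("RMSSD baseline:", round(rmssd(baseline), 1))
print("RMSSD touch:", round(rmssd(touch_phase), 1))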
Specific ICT Challenges
Enabling ICT mediated, generated, and/or interpreted social
touch requires specific ICT knowledge and technology. We consider the following issues as most prominent.
Methodology
Not uncommon for research in the embryonic stage, mediated
social touch research is going through a phase of haphazard,
anecdotal studies demonstrating the concept and its potential. To
mature, the field needs rigorous replication and methodologically
well-designed studies and protocols. The multidisciplinary nature
of the field adds to the diversity in research approaches.
Understanding Social Touches
With a few exceptions, mediated social touch studies are restricted
to producing a social touch and investigating its effects on a user.
To use social touch in interaction means that the system should
not only be able to generate social touches but also to receive
and understand social touches provided by human users. Taking
the richness of human touch into account, this is not trivial.
We may currently not even have the necessary sensor suite to
capture a social touch adequately, including parameters like shear
and tangential forces, compliance, temperature, skin stretch, etc.
After adequate capturing, algorithms should determine the social
appraisal of the touch. Currently, the first attempts to capture
social touches with different emotional values on a single body
location (e.g., the arm) and to use computer algorithms to classify
them are being undertaken (van Wingerden et al. 2014).
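To illustrate the classification step (not the sensing step), the sketch below reduces a sequence of pressure-sensor frames to a few summary features and trains an off-the-shelf classifier on synthetic data. The features, gesture labels, and data are assumptions for illustration only, and van Wingerden et al. (2014) used a neural network rather than the support vector machine shown here.

import numpy as np
from sklearn.svm import SVC

def touch_features(frames):
    """frames: array of shape (time, rows, cols) holding normalized pressure values."""
    flat = frames.reshape(len(frames), -1)
    return np.array([
        flat.max(),                        # peak pressure
        flat.mean(),                       # mean pressure
        (flat > 0.1).sum(axis=1).mean(),   # mean contact area (taxels above threshold)
        len(frames),                       # duration in frames
    ])

rng = np.random.default_rng(0)
pats = [rng.uniform(0.0, 1.0, (10, 8, 8)) for _ in range(20)]     # short, forceful touches
strokes = [rng.uniform(0.0, 0.3, (40, 8, 8)) for _ in range(20)]  # long, gentle touches
X = np.array([touch_features(f) for f in pats + strokes])
y = ["pat"] * 20 + ["stroke"] * 20
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([touch_features(rng.uniform(0.0, 0.3, (40, 8, 8)))]))  # expected: ['stroke']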
Controlled Studies
Only a few studies have actually investigated mediated affect
conveyance, and compared mediated with unmediated touch.
Although it appears that mediated social touch can indeed to
some extent convey emotions (Bailenson et al. 2007) and induce
pro-social behavior [e.g., the Midas effect; Haans and IJsselsteijn
(2009a)], it is still not known to what extent it can also elicit strong
affective experiences (Haans and IJsselsteijn 2006) and how this all
compares to real touch or other control conditions.
Protocols
Previous studies on mediated haptic interpersonal communication mainly investigated the communication of deliberately performed (instructed) rather than naturally occurring emotions
(Bailenson et al. 2007; Smith and MacLean 2007; Rantala et al.
2013). Although this protocol is very time efficient, it relies heavily
on participants’ ability to spontaneously generate social touches
with, for instance, a specific emotional value. This is comparable
to the research domain of facial expression, where trained actors are often used to produce expressions on demand. One may consider training people in producing social touches on demand or employing a protocol (scenario) that naturally evokes specific social signals rather than instructing naïve participants to produce them.
Context Aware Computing and Social Signal Processing
The meaning of a social touch is highly dependent on the accompanying verbal and non-verbal signals of the sender and the
context in which the touch is applied. An ICT system involved
in social touch interaction should take the relevant parameters
into account, both in generating touch and in interpreting touch.
Understanding and managing the social signals of the person the system is communicating with is the main challenge in the (in itself relatively young) field of social signal processing (Vinciarelli et al. 2008). Context awareness (Schilit et al. 1994) implies that the system can sense its environment and reason about it in the context of social touch.
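A toy sketch of what such context-aware interpretation could look like is given below; the context fields and the rules are invented placeholders, since deriving validated rules is precisely the open research question discussed here.

def interpret_touch(touch, context):
    """touch: dict with 'location' and normalized 'pressure'; context: dict with situational cues."""
    if context.get("relationship") == "stranger" and touch["location"] != "hand":
        return "likely inappropriate in this context"
    if touch["pressure"] < 0.3 and context.get("setting") == "comforting":
        return "probably intended as reassurance"
    if touch["pressure"] >= 0.7:
        return "probably intended to get attention"
    return "ambiguous without accompanying verbal or visual cues"

print(interpret_touch({"location": "shoulder", "pressure": 0.2},
                      {"relationship": "friend", "setting": "comforting"}))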
Congruency in Time, Space, and Semantics
As with most multimodal interactions, congruency of the signals in space, time, and meaning is of eminent importance. For
instance, touches should be congruent with other (mediated)
display modalities (visual, auditory, olfactory) to communicate
the intended meaning. In addition, congruence in time and space
between, for instance, a seen gesture and a resulting haptic sensation is required to support a common interaction metaphor
based on real touch. It has been shown that combining mediated
social touch with morphologically congruent imagery enhances
perceived social presence, whereas incongruent imagery results in
lower degrees of social presence (Haans and IJsselsteijn 2010).
Especially in closed-loop interaction (e.g., when holding or
shaking hands), signals that are out of sync may severely degrade
the interaction, thus requiring (near) real-time processing of touch
and other social signals and generation of adequate social touches
in reaction.
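The sketch below illustrates the simplest possible congruency check between a rendered visual gesture and the corresponding haptic event; the 100 ms tolerance is an assumed placeholder rather than an empirically established threshold.

TOLERANCE_S = 0.100  # assumed tolerance for perceiving the two events as one

def congruent(visual_ts, haptic_ts, tolerance=TOLERANCE_S):
    """Return True if the visual and haptic timestamps (seconds) are close enough in time."""
    return abs(visual_ts - haptic_ts) <= tolerance

for visual_ts, haptic_ts in [(12.304, 12.351), (13.010, 13.410), (14.220, 14.190)]:
    status = "in sync" if congruent(visual_ts, haptic_ts) else "out of sync: resynchronize or drop"
    print(f"visual {visual_ts:.3f} s / haptic {haptic_ts:.3f} s -> {status}")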
Enhancing Touch Cues
Social touch seems robust to degradations and mediated touch
does not need to replicate all physical parameters accurately. The
flipside of degradation is enhancement. Future research should
investigate to what extent the affective quality of the mediated
touch signals can be enhanced by the addition of other communication channels or by controlling specific touch parameters. Touch
parameters do not necessarily have to be mediated one-to-one,
but, for instance, temperature and force profiles may be either
amplified or attenuated. The additional options mediation can
provide to social touch have not been explored yet.
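As a minimal sketch of this amplification/attenuation idea, the function below scales a sampled force profile by a gain and clips it to the display's output range; the gain and range values are arbitrary assumptions for illustration.

def enhance_profile(samples, gain=1.5, lo=0.0, hi=1.0):
    """Scale each force sample by a gain and clip to [lo, hi] (normalized actuator units)."""
    return [min(hi, max(lo, s * gain)) for s in samples]

measured_squeeze = [0.1, 0.25, 0.4, 0.55, 0.4, 0.2, 0.05]  # invented sender-side profile
print(enhance_profile(measured_squeeze))             # amplified version for the receiver
print(enhance_profile(measured_squeeze, gain=0.5))   # attenuated version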
Conclusion
Social touch is of eminent importance in inter-human social communication and is grounded in specific neurophysiological processing channels. Social touch can have effects at many levels, including
physiological (heart rate and hormone levels), psychological (trust
in others), and sociological (pro-social behavior toward others).
Current ICT advances like the embodiment of artificial entities, the development of advanced haptic and tactile display technologies and standards (van Erp et al. 2010, including initial guidelines for mediated social touch: van Erp and Toet 2013) enable the exploration of new ICT systems that employ this powerful communication option, for instance, to enhance communication between physically separated partners and increase trust in and compliance with artificial entities. There are two prerequisites to make these applications viable. First, inter-human social touch can be ICT mediated, and second, social touch can be ICT generated and understood, all without loss of effectiveness, efficiency, and user satisfaction.
In this paper, we show that there is converging evidence that both prerequisites can be met. Mediated social touch shows effects at the aforementioned levels, and these effects resemble those of a real touch, even if the mediated touch is severely degraded. We also report the first indications that a social touch can be generated by an artificial entity, although the evidence base is still small. Moreover, the first steps are taken to develop algorithms to automatically classify social touches produced by the user.
Our review also shows that (mediated) social touch is an embryonic field relying largely on technology demonstrations, with only a few systematic investigations. To advance the field, we believe the focus should be on the following four activities: developing an overarching framework (integrating neuroscience, computer science, and social and behavioral science), developing basic social touch building blocks (based on the critical social touch parameters), applying stricter research methodologies (using controlled studies, validated protocols, and standard effect measures), and realizing breakthroughs in ICT (classifying social touches, context aware computing, social signal processing, congruence, and enhancing touch cues).
When we are successful in managing these challenges at the crossroads of ICT and psychology, we believe that (mediated) social touch can improve our wellbeing and quality of life, can bridge the gap between real and virtual (social) worlds, and can make artificial entities more human-like.
References
Altun, K., and MacLean, K.E. 2014. Recognizing affect in human touch of a robot.
Pattern Recognit. Lett. doi:10.1016/j.patrec.2014.10.016
App, B., McIntosh, D.N., Reed, C.L., and Hertenstein, M.J. 2011. Nonverbal channel
use in communication of emotion: how may depend on why. Emotion 11:
603–17. doi:10.1037/a0023164
Argall, B.D., and Billard, A.G. 2010. A survey of tactile human-robot interactions.
Rob. Auton. Syst. 58: 1159–76. doi:10.1016/j.robot.2010.07.002
Argyle, M. 1975. Bodily Communication. 2nd ed. London, UK: Methuen.
Bailenson, J.N., and Yee, N. 2007. Virtual interpersonal touch and digital
chameleons. J. Nonverbal Behav. 31: 225–42. doi:10.1007/s10919-007-0034-6
Bailenson, J.N., and Yee, N. 2008. Virtual interpersonal touch: haptic interaction
and copresence in collaborative virtual environments. Multimed. Tools Appl. 37:
5–14. doi:10.1007/s11042-007-0171-2
Bailenson, J.N., Yee, N., Brave, S., Merget, D., and Koslow, D. 2007. Virtual interpersonal touch: expressing and recognizing emotions through haptic devices. Hum.
Comput. Interact. 22: 325–53. doi:10.1080/07370020701493509
Basdogan, C., Ho, C.-H., Srinivasan, M.A., and Slater, M. 2000. An experimental
study on the role of touch in shared virtual environments. ACM Trans. Comput.
Hum. Interact. 7: 443–60. doi:10.1145/365058.365082
Basori, A.H., Bade, A., Sunar, M.S., Daman, D., and Saari, N. 2009. Haptic vibration
for emotional expression of avatar to enhance the realism of virtual reality.
In Proceedings of the International Conference on Computer Technology and Development (ICCTD ‘09), 416–420. Piscataway, NJ: IEEE Press.
Bickmore, T.W., Fernando, R., Ring, L., and Schulman, D. 2010. Empathic touch by relational agents. IEEE Trans. Affect. Comput. 1: 60–71. doi:10.1109/T-AFFC.2010.4
Bickmore, T.W., and Picard, R.W. 2005. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput. Hum. Interact. 12: 293–327. doi:10.1145/1067860.1067867
Bohil, C.J., Alicea, B., and Biocca, F.A. 2011. Virtual reality in neuroscience research and therapy. Nat. Rev. Neurosci. 12: 752–62. doi:10.1038/nrn3122
Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. TapTap: a haptic wearable for asynchronous distributed touch therapy. In Proceedings of the ACM Conference on Human Factors in Computing Systems CHI ‘06, 580–585. New York, NY: ACM.
Breazeal, C. 2011. Social robots for health applications. In Proceedings of the IEEE 2011 Annual International Conference on Engineering in Medicine and Biology (EMBC), 5368–5371. Piscataway, NJ: IEEE.
Brouwer, A.-M., Zander, T.O., van Erp, J.B.F., Korteling, J.E., and Bronkhorst, A.W. 2015. Using neurophysiological signals that reflect cognitive or affective state: six recommendations to avoid common pitfalls. Front. Neurosci. 9:136. doi:10.3389/fnins.2015.00136
Burgoon, J.K. 1991. Relational message interpretations of touch, conversational distance, and posture. J. Nonverbal Behav. 15: 233–59. doi:10.1007/BF00986924
Burgoon, J.K., and Walther, J.B. 1990. Nonverbal expectancies and the evaluative consequences of violations. Hum. Comm. Res. 17: 232–65. doi:10.1111/j.
1468-2958.1990.tb00232.x
Burgoon, J.K., Walther, J.B., and Baesler, E.J. 1992. Interpretations, evaluations, and
consequences of interpersonal touch. Hum. Comm. Res. 19: 237–63. doi:10.1111/
j.1468-2958.1992.tb00301.x
Bush, E. 2001. The use of human touch to improve the well-being of older adults:
A holistic nursing intervention. J. Holist. Nurs. 19(3):256–270. doi:10.1177/
089801010101900306
Cabibihan, J.-J., Ahmed, I., and Ge, S.S. 2011. Force and motion analyses of the
human patting gesture for robotic social touching. In Proceedings of the 2011
IEEE 5th International Conference on Cybernetics and Intelligent Systems (CIS),
165–169. Piscataway, NJ: IEEE.
Cabibihan, J.-J., Jegadeesan, R., Salehi, S., and Ge, S.S. 2010. Synthetic skins with
humanlike warmth. In Social Robotics, Edited by S. Ge, H. Li, J.J. Cabibihan, and
Y. Tan, 362–371. Berlin: Springer.
Cabibihan, J.-J., Pradipta, R., Chew, Y., and Ge, S. 2009. Towards humanlike social
touch for prosthetics and sociable robotics: Handshake experiments and finger
phalange indentations. In Advances in Robotics, Edited by J.H. Kim, S. Ge, P.
Vadakkepat, N. Jesse, A. Al Manum, K. Puthusserypady, et al., 73–79 Berlin:
Springer. doi:10.1007/978-3-642-03983-6_11
Cabibihan, J.-J., Zheng, L., and Cher, C.K.T. 2012. Affective tele-touch. In Social
Robotics, Edited by S. Ge, O. Khatib, J.J. Cabibihan, R. Simmons, and M.A.
Williams, 348–356. Berlin: Springer.
Calvo, R.A., and D’Mello, S. 2010. Affect detection: an interdisciplinary review of
models, methods, and their applications. IEEE Trans. Affect. Comput. 1: 18–37.
doi:10.1109/T-AFFC.2010.1
Camps, J., Tuteleers, C., Stouten, J., and Nelissen, J. 2012. A situational touch: how
touch affects people’s decision behavior. Social Influence 8: 237–50. doi:10.1080/
15534510.2012.719479
Cannon, W.B. 1927. The James-Lange theory of emotions: a critical examination
and an alternative theory. Am. J. Psychol. 39: 106–24. doi:10.2307/1415404
Cha, J., Eid, M., Barghout, A., and Rahman, A.M. 2008. HugMe: an interpersonal
haptic communication system. In IEEE International Workshop on Haptic Audio
visual Environments and Games (HAVE 2008), 99–102. Piscataway, NJ: IEEE.
Chang, S.O. 2001. The conceptual structure of physical touch in caring. J. Adv. Nurs.
33(6): 820–827. doi:10.1046/j.1365-2648.2001.01721.x
Chatel-Goldman, J., Congedo, M., Jutten, C., and Schwartz, J.L. 2014. Touch
increases autonomic coupling between romantic partners. Front. Behav. Neurosci. 8:95. doi:10.3389/fnbeh.2014.00095
Chen, T.L., King, C.-H., Thomaz, A.L., and Kemp, C.C. 2011. Touched by a robot: an
investigation of subjective responses to robot-initiated touch. In Proceedings of
the 6th International Conference on Human-Robot Interaction HRI ‘11, 457–464.
New York, NY: ACM.
Chung, K., Chiu, C., Xiao, X., and Chi, P.Y.P. 2009. Stress outsourced: a haptic social
network via crowdsourcing. In CHI ‘09 Extended Abstracts on Human Factors
in Computing Systems, Edited by D.R. Olsen, K. Hinckley, M. Ringel-Morris,
S. Hudson and S. Greenberg, 2439–2448. New York, NY: ACM. doi:10.1145/
1520340.1520346
Coan, J.A., Schaefer, H.S., and Davidson, R.J. 2006. Lending a hand: social regulation of the neural response to threat. Psychol. Sci. 17: 1032–9. doi:10.1111/j.
1467-9280.2006.01832.x
Collier, G. 1985. Emotional Expression. Hillsdale, NJ: Lawrence Erlbaum Associates
Inc.
Cooney, M.D., Nishio, S., and Ishiguro, H. 2012. Recognizing affection for a touch-based interaction with a humanoid robot. In IEEE/RSJ International Conference
on Intelligent Robots and Systems (IROS), 1420–1427. Piscataway, NJ: IEEE.
Cooney, M.D., Nishio, S., and Ishiguro, H. 2014. Importance of touch for conveying
affection in a multimodal interaction with a small humanoid robot. Int. J. Hum.
Rob. 12(1): 1550002. doi:10.1142/S0219843615500024
Cooper, E.A., Garlick, J., Featherstone, E., Voon, V., Singer, T., Critchley, H.D.,
et al. 2014. You turn me cold: evidence for temperature contagion. PLoS One
9:e116126. doi:10.1371/journal.pone.0116126
Cramer, H., Kemper, N., Amin, A., Wielinga, B., and Evers, V. 2009. “Give me a
hug”: the effects of touch and autonomy on people’s responses to embodied
social agents. Comput. Anim. Virtual Worlds 20: 437–45. doi:10.1002/cav.317
Cranny-Francis, A. 2011. Semefulness: a social semiotics of touch. Soc. Semiotics 21:
463–81. doi:10.1080/10350330.2011.591993
Crusco, A.H., and Wetzel, C.G. 1984. The midas touch: the effects of interpersonal touch on restaurant tipping. Pers. Soc. Psychol. B 10: 512–7. doi:10.1177/
0146167284104003
Dahiya, R.S., Metta, G., Valle, M., and Sandini, G. 2010. Tactile sensing – from
humans to humanoids. IEEE Trans. Rob. 26: 1–20. doi:10.1109/TRO.2009.
2033627
Damasio, A. 1999. The Feeling of What Happens: Body and Emotion in the Making
of Consciousness. London, UK: Heinemann.
Debrot, A., Schoebi, D., Perrez, M., and Horn, A.B. 2013. Touch as an interpersonal emotion regulation process in couples’ daily lives: the mediating
role of psychological intimacy. Pers. Soc. Psychol. B 39: 1373–85. doi:10.1177/
0146167213497592
Dev, P., Youngblood, P., Heinrichs, W.L., and Kusumoto, L. 2007. Virtual worlds
and team training. Anesthesiol. Clin. 25: 321–36. doi:10.1016/j.anclin.2007.03.
001
Dijk, E.O., Nijholt, A., van Erp, J.B.F., Wolferen, G.V., and Kuyper, E. 2013. Audiotactile stimulation: a tool to improve health and well-being? Int. J. Auton. Adapt.
Commun. Syst. 6: 305–23. doi:10.1504/IJAACS.2013.056818
Ditzen, B., Neumann, I.D., Bodenmann, G., von Dawans, B., Turner, R.A., Ehlert,
U., et al. 2007. Effects of different kinds of couple interaction on cortisol and
heart rate responses to stress in women. Psychoneuroendocrinology 32: 565–74.
doi:10.1016/j.psyneuen.2007.03.011
Dolin, D.J., and Booth-Butterfield, M. 1993. Reach out and touch someone: analysis of nonverbal comforting responses. Commun. Q. 41: 383–93. doi:10.1080/
01463379309369899
Dougherty, E., and Scharfe, H. 2011. Initial formation of trust: designing an interaction with geminoid-DK to promote a positive attitude for cooperation. In
Social Robotics, Edited by B. Mutlu, C. Bartneck, J. Ham, V. Evers, and T. Kanda,
95–103. Berlin: Springer.
Drescher, V.M., Gantt, W.H., and Whitehead, W.E. 1980. Heart rate response to
touch. Psychosom. Med. 42: 559–65. doi:10.1097/00006842-198011000-00004
Eichhorn, E., Wettach, R., and Hornecker, E. 2008. A stroking device for spatially
separated couples. In Proceedings of the 10th International Conference on Human
Computer Interaction with Mobile Devices and Services Mobile HCI ‘08, 303–306.
New York, NY: ACM.
Ellingsen, D.M., Wessberg, J., Chelnokova, O., Olausson, H., Laeng, B., and Leknes,
S. 2014. In touch with your emotions: oxytocin and touch change social impressions while others’ facial expressions can alter touch. Psychoneuroendocrinology
39: 11–20. doi:10.1016/j.psyneuen.2013.09.017
Field, T. 2010. Touch for socioemotional and physical well-being: a review. Dev. Rev.
30: 367–83. doi:10.1016/j.dr.2011.01.001
Fisher, J.D., Rytting, M., and Heslin, R. 1976. Hands touching hands: affective and
evaluative effects of an interpersonal touch. Sociometry 39: 416–21. doi:10.2307/
3033506
Flowers, M.G., and Aggarwal, R. 2014. Second Life™: a novel simulation platform
for the training of surgical residents. Expert Rev. Med. Devices 11: 101–3. doi:10.
1586/17434440.2014.863706
Fong, T., Nourbakhsh, I., and Dautenhahn, K. 2003. A survey of socially
interactive robots. Rob. Auton. Syst. 42: 143–66. doi:10.1016/S0921-8890(02)
00372-X
Frumin, I., Perl, O., Endevelt-Shapira, Y., Eisen, A., Eshel, N., Heller, I., et al.
2015. A social chemosignaling function for human handshaking. Elife 4: e05154.
doi:10.7554/eLife.05154
Furukawa, M., Kajimoto, H., and Tachi, S. 2012. KUSUGURI: a shared tactile
interface for bidirectional tickling. In Proceedings of the 3rd Augmented Human
International Conference AH ‘12, 1–8. New York, NY: ACM.
Gallace, A., Ngo, M.K., Sulaitis, J., and Spence, C. 2012. Multisensory presence in
virtual reality: possibilities & limitations. In Multiple Sensorial Media Advances
and Applications: New Developments in MulSeMedia, Edited by G. Ghinea, F.
Andres and S.R. Gulliver, 1–40. Vancouver, BC: IGI Global.
Gallace, A., and Spence, C. 2010. The science of interpersonal touch: an overview.
Neurosci. Biobehav. Rev. 34: 246–59. doi:10.1016/j.neubiorev.2008.10.004
Gao, Y., Bianchi-Berthouze, N., and Meng, H. 2012. What does touch tell us about
emotions in touchscreen-based gameplay? ACM Trans. Comput. Hum. Interact.
19: 1–30. doi:10.1145/2395131.2395138
Giannopoulos, E., Wang, Z., Peer, A., Buss, M., and Slater, M. 2011. Comparison of
people’s responses to real and virtual handshakes within a virtual environment.
Brain Res. Bull. 85: 276–82. doi:10.1016/j.brainresbull.2010.11.012
Gleeson, M., and Timmins, F. 2005. A review of the use and clinical effectiveness of
touch as a nursing intervention. Clin. Effect. Nurs. 9: 69–77. doi:10.1016/j.cein.
2004.12.002
Gooch, D., and Watts, L. 2010. Communicating social presence through thermal
hugs. In Proceedings of First Workshop on Social Interaction in Spatially Separated
Environments (SISSI2010), Edited by F. Schmid, T. Hesselmann, S. Boll, K.
Cheverst, and L. Kulik, 11–19. Copenhagen: International Society for Presence
Research.
Gooch, D., and Watts, L. 2012. Yourgloves, hothands and hotmits: devices to hold
hands at a distance. In Proceedings of the 25th Annual ACM Symposium on User
Interface Software and Technology UIST ‘12, 157–166. New York, NY: ACM.
Gordon, I., Voos, A.C., Bennett, R.H., Bolling, D.Z., Pelphrey, K.A. and Kaiser,
M.D. 2013. Brain mechanisms for processing affective touch. Hum. Brain Mapp.
34(4):914–922. doi:10.1002/hbm.21480
Gottlieb, G. 1971. Ontogenesis of sensory function in birds and mammals. In The
Biopsychology of Development, Edited by E. Tobach, L.R. Aronson, and E. Shaw,
67–128. New York, NY: Academic Press.
Grewen, K.M., Anderson, B.J., Girdler, S.S., and Light, K.C. 2003. Warm partner
contact is related to lower cardiovascular reactivity. Behav. Med. 29: 123–30.
doi:10.1080/08964280309596065
Guéguen, N. 2002. Touch, awareness of touch, and compliance with a request.
Percept. Mot. Skills 95: 355–60. doi:10.2466/pms.2002.95.2.355
Guéguen, N. 2004. Nonverbal encouragement of participation in a course: the effect
of touching. Soc. Psychol. Educ. 7: 89–98. doi:10.1023/B:SPOE.0000010691.
30834.14
Guéguen, N., and Fischer-Lokou, J. 2003. Tactile contact and spontaneous help:
an evaluation in a natural setting. J. Soc. Psychol. 143: 785–7. doi:10.1080/
00224540309600431
Guéguen, N., Meineri, S., and Charles-Sire, V. 2010. Improving medication adherence by using practitioner nonverbal techniques: a field experiment on the effect
of touch. J. Behav. Med. 33: 466–73. doi:10.1007/s10865-010-9277-5
Haans, A., and IJsselsteijn, W.A. 2006. Mediated social touch: a review of current research and future directions. Virtual Reality 9: 149–59. doi:10.1007/
s10055-005-0014-2
Haans, A., and IJsselsteijn, W.A. (2009a). The virtual Midas Touch: helping behavior
after a mediated social touch. IEEE Trans. Haptics 2: 136–40. doi:10.1109/TOH.
2009.20
Haans, A., and IJsselsteijn, W.A. (2009b). I’m always touched by your presence, dear:
combining mediated social touch with morphologically correct visual feedback.
In Proceedings of Presence 2009, 1–6. Los Angeles, CA: International Society for
Presence Research.
Haans, A., and IJsselsteijn, W.A. 2010. Combining mediated social touch with
vision: from self-attribution to telepresence? In Proceedings of Special Symposium
at EuroHaptics 2010: Haptic and Audio-Visual Stimuli: Enhancing Experiences
and Interaction, Edited by A. Nijholt, E.O. Dijk, and P.M.C. Lemmens, 35–46
Enschede:University of Twente.
Henricson, M., Ersson, A., Määttä, S., Segesten, K., and Berglund, A.L. 2008. The
outcome of tactile touch on stress parameters in intensive care: A randomized
controlled trial. Complement. Ther. Clin. Prac. 14(4): 244–254.
Harlow, H.F., and Zimmermann, R.R. 1959. Affectional responses in the infant
monkey; orphaned baby monkeys develop a strong and persistent attachment
to inanimate surrogate mothers. Science 130: 421–32. doi:10.1126/science.130.
3373.421
Hertenstein, M.J., Holmes, R., McCullough, M., and Keltner, D. 2009. The communication of emotion via touch. Emotion 9: 566–73. doi:10.1037/a0016108
Hertenstein, M.J., Keltner, D., App, B., Bulleit, B.A., and Jaskolka, A.R. 2006. Touch
communicates distinct emotions. Emotion 6: 528–33. doi:10.1037/1528-3542.6.
3.528
Hogervorst, M.A., Brouwer, A.-M., and van Erp, J.B.F. 2014. Combining and comparing EEG, peripheral physiology and eye-related measures for the assessment
of mental workload. Front. Neurosci. 8:322. doi:10.3389/fnins.2014.00322
Hornik, J. 1992. Tactile stimulation and consumer response. J. Consum. Res. 19:
449–58. doi:10.1086/209314
Hossain, S.K.A., Rahman, A.S.M.M., and El Saddik, A. 2011. Measurements of
multimodal approach to haptic interaction in second life interpersonal communication system. IEEE Trans. Instrum. Meas. 60: 3547–58. doi:10.1109/TIM.
2011.2161148
Huisman, G., Bruijnes, M., Kolkmeier, J., Jung, M.M., Darriba Frederiks, A., and
Rybarczyk, Y. (2014a). Touching virtual agents: embodiment and mind. In
Innovative and Creative Developments in Multimodal Interaction Systems, Edited
by Y. Rybarczyk, T. Cardoso, J. Rosas, and L. Camarinha-Matos, 114–138.
Berlin: Springer.
Huisman, G., Kolkmeier, J., and Heylen, D. (2014b). Simulated social touch in a collaborative game. In Haptics: Neuroscience, Devices, Modeling, and Applications,
Edited by M. Auvray and C. Duriez, 248–256. Berlin: Springer.
Huisman, G., Darriba Frederiks, A., Van Dijk, E.M.A.G., Kröse, B.J.A., and Heylen,
D.K.J. 2013. Self touch to touch others: designing the tactile sleeve for social
touch. In Online Proceedings of TEI’13. Available at: http://www.tei-conf.org/13/
sites/default/files/page-files/Huisman.pdf
Hutson, S., Lim, S., Bentley, P.J., Bianchi-Berthouze, N., and Bowling, A. 2011.
Investigating the suitability of social robots for the wellbeing of the elderly. In
Affective Computing and Intelligent Interaction, Edited by S. D’Mello, A. Graesser,
B. Schuller, and J.C. Martin, 578–587. Berlin: Springer.
IJsselsteijn, W.A., van Baren, J., and van Lanen, F. 2003. Staying in touch: social
presence and connectedness through synchronous and asynchronous communication media (Part III). In Human-Computer Interaction: Theory and Practice,
Edited by C. Stephanidis and J. Jacko, 924–928. Boca Raton, FL: CRC Press.
IJzerman, H., and Semin, G.R. 2009. The thermometer of social relations: mapping social proximity on temperature. Psychol. Sci. 20: 1214–20. doi:10.1111/j.
1467-9280.2009.02434.x
IJzerman, H., and Semin, G.R. 2010. Temperature perceptions as a ground for social
proximity. J. Exp. Soc. Psychol. 46: 867–73. doi:10.1016/j.jesp.2010.07.015
James, W. 1884. What is an emotion? Mind 9: 188–205. doi:10.1093/mind/os-IX.
34.188
Jones, S.E., and Yarbrough, A.E. 1985. A naturalistic study of the meanings of touch.
Comm. Monogr. 52: 19–56. doi:10.1080/03637758509376094
Joule, R.V., and Guéguen, N. 2007. Touch, compliance, and awareness of tactile
contact. Percept. Mot. Skills 104: 581–8. doi:10.2466/pms.104.2.581-588
Jung, M.M., Poppe, R., Poel, M., and Heylen, D.K.J. 2014. Touching the VOID –
introducing CoST: corpus of social touch. In Proceedings of the 16th International
Conference on Multimodal Interaction (ICMI ‘14), 120–127. New York, NY:
ACM.
Kanamori, M., Suzuki, M., Oshiro, H., Tanaka, M., Inoguchi, T., Takasugi, H.,
et al. 2003. Pilot study on improvement of quality of life among elderly using
a pet-type robot. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, 107–112. Piscataway, NJ:
IEEE.
Khanna, P., and Sasikumar, M. 2010. Recognising emotions from keyboard stroke
pattern. Int. J. Comput. Appl. 11: 1–5. doi:10.5120/1614-2170
Kjeldskov, J., Gibbs, M., Vetere, F., Howard, S., Pedell, S., Mecoles, K., et al. 2004.
Using cultural probes to explore mediated intimacy. Australas J. Inf. Syst. 11.
Available at: http://journal.acs.org.au/index.php/ajis/article/view/128
Kleinke, C.L. 1977. Compliance to requests made by gazing and touching experimenters in field settings. J. Exp. Soc. Psychol. 13: 218–23. doi:10.1016/
0022-1031(77)90044-0
Knapp, M.L., and Hall, J.A. 2010. Nonverbal Communication in Human Interaction
(7th ed.). Boston, MA: Wadsworth, CENGAGE Learning.
Konijn, E.A., Utz, S., Tanis, M., and Barnes, S.B. 2008. Mediated Interpersonal
Communication. New York, NY: Routledge.
Kotranza, A., and Lok, B. 2008. Virtual human + tangible interface = mixed reality
human: an initial exploration with a virtual breast exam patient. In Proceedings
of the IEEE Virtual Reality Conference 2008 (VR ‘08), 99–106. Piscataway, NJ:
IEEE.
Kotranza, A., Lok, B., Deladisma, A., Pugh, C.M., and Lind, D.S. 2009. Mixed
reality humans: evaluating behavior, usability, and acceptability. IEEE Trans. Vis.
Comput. Graph. 15: 369–82. doi:10.1109/TVCG.2008.195
Kraus, M.W., Huang, C., and Keltner, D. 2010. Tactile communication, cooperation,
and performance: an ethological study of the NBA. Emotion 10: 745–9. doi:10.
1037/a0019382
Kristoffersson, A., Coradeschi, S., and Loutfi, A. 2013. A review of mobile
robotic telepresence. Adv. Hum. Comput. Int. 2013: 1–17. doi:10.1155/2013/
902316
Kuwamura, K., Sakai, K., Minato, T., Nishio, S., and Ishiguro, H. 2013. Hugvie: a
medium that fosters love. In The 22nd IEEE International Symposium on Robot
and Human Interactive Communication, 70–75. Gyeongju: IEEE.
Kvam, M.H. 1997. The effect of vibroacoustic therapy. Physiotherapy 83: 290–5.
doi:10.1016/S0031-9406(05)66176-7
Ledbetter, A.M. 2014. The past and future of technology in interpersonal communication theory and research. Commun. Stud. 65: 456–9. doi:10.1080/10510974.
2014.927298
Lee, J.K., Stiehl, W.D., Toscano, R.L., and Breazeal, C. 2009. Semi-autonomous robot
avatar as a medium for family communication and education. Adv. Rob. 23:
1925–49. doi:10.1163/016918609X12518783330324
Lee, K.M., Jung, Y., Kim, J., and Kim, S.R. 2006. Are physically embodied social
agents better than disembodied social agents? The effects of physical embodiment, tactile interaction, and people’s loneliness in human-robot interaction.
Int. J. Hum. Comput. Stud. 64: 962–73. doi:10.1016/j.ijhcs.2006.05.002
Lemmens, P., Crompvoets, F., Brokken, D., van den Eerenbeemd, J., and de
Vries, G.J. 2009. A body-conforming tactile jacket to enrich movie viewing. In
EuroHaptics Conference, 2009 and Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems. World Haptics 2009. Third Joint, 7–12.
Piscataway, NJ: IEEE Press.
Li, H., Cabibihan, J.-J., and Tan, Y. 2011. Towards an effective design of social robots.
Int. J. Soc. Rob. 3: 333–5. doi:10.1007/s12369-011-0121-z
Löken, L.S., Wessberg, J., Morrison, I., McGlone, F., and Olausson, H. 2009. Coding
of pleasant touch by unmyelinated afferents in humans. Nat. Neurosci. 12(5):
547–548. doi:10.1038/nn.2312
Lv, H.-R., Lin, Z.-L., Yin, W.-J., and Dong, J. 2008. Emotion recognition based on
pressure sensor keyboards. In IEEE International Conference on Multimedia and
Expo 2008, 1089–1092. Piscataway, NJ: IEEE.
Master, S.L., Eisenberger, N.I., Taylor, S.E., Naliboff, B.D., Shirinyan, D., and Lieberman, M.D. 2009. A picture’s worth: partner photographs reduce experimentally
induced pain. Psychol. Sci. 20: 1316–8. doi:10.1111/j.1467-9280.2009.02444.x
McCall, C., and Blascovich, J. 2009. How, when, and why to use digital experimental
virtual environments to study social behavior. Soc. Pers. Psychol. Compass 3:
744–58. doi:10.1111/j.1751-9004.2009.00195.x
McCance, R.A., and Otley, M. 1951. Course of the blood urea in newborn rats, pigs
and kittens. J. Physiol. 113: 18–22. doi:10.1113/jphysiol.1951.sp004552
McDaniel, E., and Andersen, P.A. 1998. International patterns of interpersonal
tactile communication: a field study. J. Nonverbal Behav. 22: 59–75. doi:10.1023/
A:1022952509743
McGlone, F., Wessberg, J., and Olausson, H. 2014. Discriminative and affective touch: sensing and feeling. Neuron 82: 737–55. doi:10.1016/j.neuron.2014.
05.001
Mehrabian, A. 1972. Nonverbal Communication. Chicago, IL: Aldine-Atherton.
Mikropoulos, T.A., and Natsis, A. 2011. Educational virtual environments: a ten-year review of empirical research (1999-2009). Comput. Educ. 56: 769–80. doi:10.
1016/j.compedu.2010.10.020
Montagu, A. 1972. Touching: The Human Significance of the Skin. New York, NY:
Harper & Row Publishers.
Morrison, I., Löken, L., and Olausson, H. 2010. The skin as a social organ. Exp. Brain
Res. 204: 305–14. doi:10.1007/s00221-009-2007-y
Morrison, I., Björnsdotter, M., and Olausson, H. 2011. Vicarious responses to
social touch in posterior insular cortex are tuned to pleasant caressing speeds.
J. Neurosci. 31(26):9554–9562.
Mueller, F., Vetere, F., Gibbs, M.R., Kjeldskov, J., Pedell, S., and Howard, S. 2005.
Hug over a distance. In CHI ‘05 Extended Abstracts on Human Factors in
Computing Systems, Edited by G. van der Veer and C. Gale, 1673–1676. New
York, NY: ACM. doi:10.1145/1056808.1056994
Nakagawa, K., Shiomi, M., Shinozawa, K., Matsumura, R., Ishiguro, H., and
Hagita, N. 2011. Effect of robot’s active touch on people’s motivation. In Proceedings of the 6th International Conference on Human-Robot Interaction HRI
‘11, 465–472. New York, NY: ACM.
Nakanishi, H., Tanaka, K., and Wada, Y. 2014. Remote handshaking: touch enhances
video-mediated social telepresence. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems CHI’14, 2143–2152. New York, NY: ACM.
Nakanishi, J., Kuwamura, K., Minato, T., Nishio, S., and Ishiguro, H. 2013. Evoking
affection for a communication partner by a robotic communication medium. In
The First International Conference on Human-Agent Interaction (iHAI 2013), 1–8.
Available at: http://hai-conference.net/ihai2013/proceedings/pdf/III-1-4.pdf
Naya, F., Yamato, J., and Shinozawa, K. 1999. Recognizing human touching behaviors using a haptic interface for a pet-robot. In Conference Proceedings of the
IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC ‘99),
1030–1034. Piscataway, NJ: IEEE.
Nguyen, N., Wachsmuth, I., and Kopp, S. 2007. Touch perception and emotional
appraisal for a virtual agent. In Proceedings of the 2nd Workshop Emotion and
Computing-Current Research and Future Impact, Edited by D. Reichardt and
P. Levi, 17–22. Stuttgart: Berufsakademie Stuttgart.
Nguyen, T., Heslin, R., and Nguyen, M.L. 1975. The meanings of
touch: sex differences. J. Commun. 25: 92–103. doi:10.1111/j.1460-2466.
1975.tb00610.x
Nie, J., Park, M., Marin, A.L., and Sundar, S.S. 2012. Can you hold my hand? Physical
warmth in human-robot interaction. In Proceeedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 201–202. Piscataway,
NJ: IEEE.
Nijholt, A. 2014. Breaking fresh ground in human-media interaction research.
Front. ICT 1:4. doi:10.3389/fict.2014.00004
Nummenmaa, L., Glerean, E., Hari, R., and Hietanen, J.K. 2013. Bodily maps
of emotions. Proc. Natl. Acad. Sci. U.S.A. 111: 646–51. doi:10.1073/pnas.
1321664111
Olausson, H.W., Cole, J., Vallbo, Å, McGlone, F., Elam, M., Krämer, H.H., et al. 2008.
Unmyelinated tactile afferents have opposite effects on insular and somatosensory cortical processing. Neurosci. Lett. 436: 128–32. doi:10.1016/j.neulet.2008.
03.015
Paladino, M.P., Mazzurega, M., Pavani, F., and Schubert, T.W. 2010. Synchronous
multisensory stimulation blurs self-other boundaries. Psychol. Sci. 21: 1202–7.
doi:10.1177/0956797610379234
Park, E., and Lee, J. 2014. I am a warm robot: the effects of temperature
in physical human – robot interaction. Robotica 32: 133–42. doi:10.1017/
S026357471300074X
Park, Y.W., Bae, S.H., and Nam, T.J. 2012. How do Couples Use CheekTouch Over
Phone Calls? New York, NY: ACM. 763–6.
Park, Y.W., Hwang, S., and Nam, T.J. 2011. Poke: emotional touch delivery through
an inflatable surface over interpersonal mobile communications. In Adjunct
Proceedings of the 24th Annual ACM Symposium Adjunct on User Interface
Software and Technology UIST ‘11, 61–62. New York, NY: ACM.
Patrick, G. 1999. The effects of vibroacoustic music on symptom reduction. IEEE
Eng. Med. Biol. Mag. 18: 97–100. doi:10.1109/51.752987
Phelan, J.E. 2009. Exploring the use of touch in the psychotherapeutic setting: a phenomenological review. Psychotherapy (Chic) 46: 97–111. doi:10.1037/a0014751
Prisby, R.D., Lafage-Proust, M.-H., Malaval, L., Belli, A., and Vico, L. 2008. Effects
of whole body vibration on the skeleton and other organ systems in man and
animal models: what we know and what we need to know. Ageing Res. Rev. 7:
319–29. doi:10.1016/j.arr.2008.07.004
Puhan, M.A., Suarez, A., Cascio, C.L., Zahn, A., Heitz, M., and Braendli, O. 2006.
Didgeridoo playing as alternative treatment for obstructive sleep apnoea syndrome: randomised controlled trial. BMJ 332: 266–70. doi:10.1136/bmj.38705.
470590.55
Rahman, A.S.M.M., and El Saddik, A. 2011. HKiss: real world based haptic interaction with virtual 3D avatars. In Proceedings of the 2011 IEEE International
Conference on Multimedia and Expo (ICME), 1–6. Piscataway, NJ: IEEE.
Rantala, J., Raisamo, R., Lylykangas, J., Ahmaniemi, T., Raisamo, J., Rantala, J., et al.
2011. The role of gesture types and spatial feedback in haptic communication.
IEEE Trans. Haptics 4: 295–306. doi:10.1109/TOH.2011.4
Rantala, J., Salminen, K., Raisamo, R., and Surakka, V. 2013. Touch gestures in
communicating emotional intention via vibrotactile stimulation. Int. J. Hum.
Comput. Stud. 7: 679–90. doi:10.1016/j.ijhcs.2013.02.004
Réhman, S., and Liu, L. 2010. iFeeling: vibrotactile rendering of human emotions on
mobile phones. In Mobile Multimedia Processing, Edited by X. Jiang, M.Y. Ma,
and C. Chen, 1–20. Berlin: Springer.
Robinson, H., MacDonald, B., Kerse, N., and Broadbent, E. 2013. The psychosocial
effects of a companion robot: a randomized controlled trial. J. Am. Med. Dir.
Assoc. 14: 661–7. doi:10.1016/j.jamda.2013.02.007
Rolls, E.T. 2010. The affective and cognitive processing of touch, oral texture, and
temperature in the brain. Neurosci. Biobehav. Rev. 34: 237–45. doi:10.1016/j.
neubiorev.2008.03.010
Saadatian, E., Samani, H., Parsani, R., Pandey, A.V., Li, J., Tejada, L., et al. 2014.
Mediating intimacy in long-distance relationships using kiss messaging. Int. J.
Hum. Comput. Stud. 72: 736–46. doi:10.1016/j.ijhcs.2014.05.004
Sallnäs, E.L. 2010. Haptic feedback increases perceived social presence. In Haptics:
Generating and Perceiving Tangible Sensations, Part II, Edited by A.M. Kappers,
J.B. Erp, W.M. Bergmann Tiest, and F.C. Helm, 178–185. Berlin: Springer.
Salminen, K., Surakka, V., Lylykangas, J., Raisamo, R., Saarinen, R., Raisamo,
R., et al. 2008. Emotional and behavioral responses to haptic stimulation. In
Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors
in Computing Systems, 1555–1562. New York, NY: ACM Press.
Scheele, D., Kendrick, K.M., Khouri, C., Kretzer, E., Schläpfer, T.E., StoffelWagner, B., et al. 2014. An oxytocin-induced facilitation of neural and emotional
responses to social touch correlates inversely with autism traits. Neuropsychopharmacology 39: 2078–85. doi:10.1038/npp.2014.78
Schilit, B., Adams, N., and Want, R. 1994. Context-aware computing applications. In
First Workshop on Mobile Computing Systems and Applications (WMCSA 1994),
85–90. Piscataway, NJ: IEEE.
Self, B.P., van Erp, J.B.F., Eriksson, L., and Elliott, L.R. 2008. Human factors issues
of tactile displays for military environments. In Tactile Displays for Navigation,
Orientation and Communication in Military Environments, Edited by J.B.F. van
Erp and B.P. Self, 3. Neuilly-sur-Seine: NATO RTO.
Shermer, M. 2004. A bounty of science. Sci. Am. 290: 33. doi:10.1038/
scientificamerican0204-33
Shin, H., Lee, J., Park, J., Kim, Y., Oh, H., and Lee, T. 2007. A tactile emotional
interface for instant messenger chat. In Proceedings of the 2007 Conference
on Human Interface, Edited by M.J. Smith and G. Salvendy, 166–175. Berlin:
Springer.
Smith, J., and MacLean, K. 2007. Communicating emotion through a haptic link:
design space and methodology. Int. J. Hum. Comput. Stud. 65: 376–87. doi:10.
1016/j.ijhcs.2006.11.006
Solon, O. 2015. These sex tech toys will blow your mind. WIRED. Available at:
http://www.wired.co.uk/news/archive/2014-06/27/sex-tech
Stiehl, W.D., Lieberman, J., Breazeal, C., Basel, L., Lalla, L., and Wolf, M. 2005.
Design of a therapeutic robotic companion for relational, affective touch. In
IEEE International Workshop on Robot and Human Interactive Communication
(ROMAN 2005), 408–415. Piscataway, NJ: IEEE.
Suhonen, K., Müller, S., Rantala, J., Väänänen-Vainio-Mattila, K., Raisamo, R., and
Lantz, V. (2012a). Haptically augmented remote speech communication: a study
of user practices and experiences. In Proceedings of the 7th Nordic Conference
on Human-Computer Interaction: Making Sense Through Design NordiCHI ‘12,
361–369. New York, NY: ACM.
Suhonen, K., Väänänen-Vainio-Mattila, K., and Mäkelä, K. (2012b). User experiences and expectations of vibrotactile, thermal and squeeze feedback in interpersonal communication. In Proceedings of the 26th Annual BCS Interaction
Specialist Group Conference on People and Computers BCS-HCI ‘12, 205–214.
New York, NY: ACM.
Suk, H.-J., Jeong, S.-H., Hang, T.-H., and Kwon, D.-S. 2009. Tactile sensation as
emotion elicitor. Kansei Eng. Int. 8(2): 147–52.
Takahashi, K., Mitsuhashi, H., Murata, K., Norieda, S., and Watanabe, K. 2011.
Improving shared experiences by haptic telecommunication. In 2011 International Conference on Biometrics and Kansei Engineering (ICBAKE), 210–215. Los
Alamitos, CA: IEEE. doi:10.1109/ICBAKE.2011.19
Tanaka, F., Cicourel, A., and Movellan, J.R. 2007. Socialization between toddlers
and robots at an early childhood education center. Proc. Natl. Acad. Sci. U.S.A.
104: 17954–8. doi:10.1073/pnas.0707769104
Teh, J.K.S., Cheok, A.D., Peiris, R.L., Choi, Y., Thuong, V., and Lai, S. 2008. Huggy
pajama: a mobile parent and child hugging communication system. In Proceedings of the 7th International Conference on Interaction Design and Children IDC
‘08, 250–257. New York, NY: ACM.
Thompson, E.H., and Hampton, J.A. 2011. The effect of relationship status on
communicating emotions through touch. Cognit. Emot. 25: 295–306. doi:10.
1080/02699931.2010.492957
Toet, A., van Erp, J.B.F., Petrignani, F.F., Dufrasnes, M.H., Sadhashivan, A., van
Alphen, D., et al. 2013. Reach out and touch somebody’s virtual hand. Affectively
connected through mediated touch. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 786–791.
Piscataway, NJ: IEEE Computer Society.
Tsalamlal, M.Y., Ouarti, N., Martin, J.C., and Ammi, M. 2014. Haptic
communication of dimensions of emotions using air jet based tactile
stimulation. J. Multimodal User Interfaces 9(1): 69–77. doi:10.1007/s12193-014
-0162-3
Tsetserukou, D. 2010. HaptiHug: a novel haptic display for communication of hug
over a distance. In Haptics: Generating and Perceiving Tangible Sensations, Edited
by A.M. Kappers, J.B. Erp, W.M. Bergmann Tiest, and F.C. Helm, 340–347.
Berlin: Springer.
Tsetserukou, D., and Neviarouskaya, A. 2010. Innovative real-time communication
system with rich emotional and haptic channels. In Haptics: Generating and
Perceiving Tangible Sensations, Edited by A.M. Kappers, J.B. van Erp, W.M.
Bergmann Tiest, and F.C. Helm, 306–313. Berlin: Springer.
Tsihrintzis, G.A., Virvou, M., Alepis, E., and Stathopoulou, I.O. 2008. Towards
improving visual-facial emotion recognition through use of complementary
keyboard-stroke pattern information. In Fifth International Conference on
Information Technology: New Generations (ITNG 2008), 32–37. Piscataway,
NJ: IEEE.
Uvnäs-Moberg, K. 1997. Physiological and endocrine effects of social contact. Ann.
N. Y. Acad. Sci. 807: 146–63. doi:10.1111/j.1749-6632.1997.tb51917.x
Vallbo, A., Olausson, H., Wessberg, J., and Norrsell, U. 1993. A system of unmyelinated afferents for innocuous mechanoreception in the human skin. Brain Res.
628: 301–4. doi:10.1016/0006-8993(93)90968-S
Van Bel, D.T., IJsselsteijn, W.A., and de Kort, Y.A.W. 2008. Interpersonal connectedness: conceptualization and directions for a measurement instrument. In
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(CHI’08), 3129–3134. New York, NY: ACM.
van Bel, D.T., Smolders, K.C.H.J., IJsselsteijn, W.A., and de Kort, Y.A.W. 2009. Social
connectedness: concept and measurement. In Proceedings of the 5th International
Conference on Intelligent Environments, Edited by V. Callaghan, A. Kameas, A.
Reyes, D. Royo, and M. Weber, 67–74. Amsterdam: IOS Press.
van Erp, J.B.F. 2007. Tactile Displays for Navigation and Orientation: Perception and
Behaviour. Utrecht: Utrecht University.
van Erp, J.B.F. 2012. The ten rules of touch: guidelines for social agents and robots
that can touch. In Proceedings of the 25th Annual Conference on Computer
Animation and Social Agents (CASA 2012), Singapore: Nanyang Technological
University.
van Erp, J.B.F., Kyung, K.-U., Kassner, S., Carter, J., Brewster, S., Weber, G., et al.
2010. Setting the standards for haptic and tactile interactions: ISO’s work. In
Haptics: Generating and Perceiving Tangible Sensations. Proceedings of Eurohaptics 2010, Edited by A.M.L. Kappers, J.B.F. van Erp, W.M. Bergmann Tiest, and
F.C.T. van der Helm, 353–358. Heidelberg: Springer.
van Erp, J.B.F., and Toet, A. 2013. How to touch humans. Guidelines for social agents
and robots that can touch. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 780–785. Geneva:IEEE
Computer Society. doi:10.1109/ACII.2013.77145
van Erp, J.B.F., and van Veen, H.A.H.C. 2004. Vibrotactile in-vehicle navigation
system. Transp. Res. Part F Traffic Psychol. Behav. 7: 247–56. doi:10.1016/j.trf.
2004.09.003
van Wingerden, S., Uebbing, T.J., Jung, M.M., and Poel, M. 2014. A neural network
based approach to social touch classification. In Proceedings of the 2014 Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems (ERM4HCI ‘14), 7–12. New York, NY: ACM.
Vinciarelli, A., Pantic, M., Bourlard, H., and Pentland, A. 2008. Social signal processing: state-of-the-art and future perspectives of an emerging domain. In Proceedings of the 16th ACM International Conference on Multimedia, 1061–1070.
New York, NY: ACM.
Vrontou, S., Wong, A.M., Rau, K.K., Koerber, H.R., and Anderson, D.J. 2013.
Genetic identification of C fibres that detect massage-like stroking of hairy skin
in vivo. Nature 493: 669–73. doi:10.1038/nature11810
Wada, K., Ikeda, Y., Inoue, K., and Uehara, R. 2010. Development and preliminary
evaluation of a caregiver’s manual for robot therapy using the therapeutic seal
robot Paro. In Proceedings of the IEEE International Workshop on Robot and
Human Interactive Communication (RO-MAN 2010), 533–538. Piscataway, NJ:
IEEE.
Wada, K., and Shibata, T. 2007. Living with seal robots – its sociopsychological
and physiological influences on the elderly at a care house. IEEE Trans. Rob. 23:
972–80. doi:10.1109/TRO.2007.906261
Walker, S.C., and McGlone, F.P. 2015. Perceived pleasantness of social touch reflects
the anatomical distribution and velocity tuning of C-tactile afferents: an affective
homunculus. In Program No. 339.14/HH22. 2014 Neuroscience Meeting Planner,
Washington, DC: Society for Neuroscience.
Wang, R., and Quek, F. 2010. Touch & talk: contextualizing remote touch for affective interaction. In Proceedings of the 4th International Conference on Tangible,
Embedded, and Embodied Interaction (TEI ‘10), 13–20. New York, NY: ACM.
Wang, R., Quek, F., Tatar, D., Teh, K.S., and Cheok, A.D. 2012. Keep in touch:
channel, expectation and experience. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems CHI ‘12, 139–148. New York, NY: ACM.
Whitcher, S.J., and Fisher, J.D. 1979. Multidimensional reaction to therapeutic touch
in a hospital setting. J. Pers. Soc. Psychol. 37: 87–96. doi:10.1037/0022-3514.37.
1.87
Wigram, A.L. 1996. The Effects of Vibroacoustic Therapy on Clinical and NonClinical Populations. Ph.D. thesis, St. George’s Hospital Medical School, London
University, London.
Yohanan, S., Chan, M., Hopkins, J., Sun, H., and MacLean, K. 2005. Hapticat: exploration of affective touch. In Proceedings of the 7th International
Conference on Multimodal Interfaces (ICMI ‘05), 222–229. New York, NY:
ACM.
Yohanan, S., and MacLean, K. 2012. The role of affective touch in human-robot
interaction: human intent and expectations in touching the haptic creature. Int.
J. Soc. Rob. 4: 163–80. doi:10.1007/s12369-011-0126-7
Zacharatos, H., Gatzoulis, C., and Chrysanthou, Y.L. 2014. Automatic emotion
recognition based on body movement analysis: a survey. IEEE CGA 34: 35–45.
doi:10.1109/MCG.2014.106
Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be
construed as a potential conflict of interest.
Copyright © 2015 van Erp and Toet. This is an open-access article distributed under
the terms of the Creative Commons Attribution License (CC BY). The use, distribution
or reproduction in other forums is permitted, provided the original author(s) or
licensor are credited and that the original publication in this journal is cited, in
accordance with accepted academic practice. No use, distribution or reproduction is
permitted which does not comply with these terms.
REVIEW
published: 27 May 2015
doi: 10.3389/fdigh.2015.00002
Social touch in human–computer
interaction
Jan B. F. van Erp 1,2* and Alexander Toet 1
1 Perceptual and Cognitive Systems, TNO, Soesterberg, Netherlands; 2 Human Media Interaction, University of Twente, Enschede, Netherlands
Edited by:
Yoram Chisik,
University of Madeira, Portugal
Reviewed by:
Mohamed Chetouani,
Université Pierre et Marie Curie,
France
Gualtiero Volpe,
Università degli Studi di Genova, Italy
Hongying Meng,
Brunel University London, UK
*Correspondence:
Jan B. F. van Erp,
TNO Human Factors, Kampweg 5,
Soesterberg 3769DE, Netherlands
[email protected]
Specialty section:
This article was submitted to
Human-Media Interaction, a section
of the journal Frontiers in Digital
Humanities
Received: 06 February 2015
Paper pending published:
19 March 2015
Accepted: 08 May 2015
Published: 27 May 2015
Citation:
van Erp JBF and Toet A (2015) Social
touch in human–computer interaction.
Front. Digit. Humanit. 2:2.
doi: 10.3389/fdigh.2015.00002
Touch is our primary non-verbal communication channel for conveying intimate emotions
and as such essential for our physical and emotional wellbeing. In our digital age, human
social interaction is often mediated. However, even though there is increasing evidence
that mediated touch affords affective communication, current communication systems
(such as videoconferencing) still do not support communication through the sense of
touch. As a result, mediated communication does not provide the intense affective
experience of co-located communication. The need for ICT mediated or generated
touch as an intuitive way of social communication is even further emphasized by the
growing interest in the use of touch-enabled agents and robots for healthcare, teaching,
and telepresence applications. Here, we review the important role of social touch in
our daily life and the available evidence that affective touch can be mediated reliably
between humans and between humans and digital agents. We base our observations
on evidence from psychology, computer science, sociology, and neuroscience with
focus on the first two. Our review shows that mediated affective touch can modulate
physiological responses, increase trust and affection, help to establish bonds between
humans and avatars or robots, and initiate pro-social behavior. We argue that ICT
mediated or generated social touch can (a) intensify the perceived social presence of
remote communication partners and (b) enable computer systems to more effectively
convey affective information. However, this research field on the crossroads of ICT and
psychology is still embryonic and we identify several topics that can help to mature the
field in the following areas: establishing an overarching theoretical framework, employing
better research methodologies, developing basic social touch building blocks, and solving
specific ICT challenges.
Keywords: affective touch, mediated touch, social touch, interpersonal touch, human–computer interaction,
human–robot interaction, haptic, tactile
Introduction
Affective Touch in Interpersonal Communication
The sense of touch is the earliest sense to develop in a human embryo (Gottlieb 1971)
and is critical for mammals’ early social development and to grow up healthily (Harlow and
Zimmermann 1959; Montagu 1972). The sense of touch is one of the first mediums of communication between newborns and parents. Interpersonal communication is to a large extent
non-verbal and one of the primary purposes of non-verbal behavior is to communicate emotional states. Non-verbal communication includes facial expressions, prosody, gesture, and touch
(Argyle 1975; Knapp and Hall 2010) of which touch is the primary
modality for conveying intimate emotions (Field 2010; Morrison
et al. 2010; App et al. 2011), for instance, in greetings, in corrections, and in (sexual) relationships. As touch implies direct
physical interaction and co-location, it inherently has the potential
to elicit feelings of social presence. The importance of touch as a
modality in social communication is highlighted by the fact that
the human skin has specific receptors to process affective touch
(“the skin as a social organ”: Morrison et al. 2010) in addition
to those for discriminative touch (Löken et al. 2009; Morrison
et al. 2011; Gordon et al. 2013; McGlone et al. 2014), presumably
like all mammals (Vrontou et al. 2013). ICT systems can employ
human touch for information processing (discriminative touch)
and communication (social touch) as well.
Human–computer interaction applications increasingly deploy
intelligent agents to support the social aspects of the interaction.
Social agents (either embodied or virtual) already employ vision
and audition to communicate social signals but generally lack
touch capabilities. If we look at applications in robots and avatars,
the first applications including touch facilitated information from
user to system only, e.g., in the form of a touch screen or through
specific touch sensors in a tangible interface. Social agents that can
touch the user are of much more recent date. We believe that social
agents could benefit from generating and perceiving social touch
cues (van Erp 2012). Based on studies reviewed in this paper, we
expect that people will feel a closer bond with agents or robots
that use and respond to affective touch since they appear more
human than machine-like and more trustworthy. Touch-enabled
social agents are therefore our second topic of interest.
Discriminative Touch in ICT Systems
Conventional systems for human–computer interaction only
occasionally employ the sense of touch and mainly provide information through vision and audition. One of the first large-scale
applications of a tactile display was the vibration function on
mobile phones, communicating the 1-bit message of an incoming
call, and the number of systems that include the sense of touch
has steadily increased over the past two decades. An important
reason for the sparse use of touch is the supposed low bandwidth of the touch channel (Gallace et al. 2012). Although often
underestimated, our touch sense is very well able to process large
amounts of abstract information. For instance, blind people who
are trained in Braille reading can actually read with their fingertips. This information processing capability is increasingly applied
in our interaction with systems, and more complex information
is being displayed, e.g., to reduce the risk of visual and auditory
overload in car driving, to make us feel more immersed in virtual
environments, or to realistically train and execute certain medical
skills (van Erp and van Veen 2004; Self et al. 2008).
Touch in Social Communication
Social touch can take many forms in our daily lives, such as greetings (shaking hands, embracing, kissing, backslapping, and cheek-tweaking), in intimate communication (holding hands, cuddling,
stroking, back scratching, massaging), and in corrections (punishment, spank on the bottom). Effects of social touch are apparent
at many levels ranging from physiology to social behavior as we
will discuss in the following sections.
Social touches can elicit a range of strong experiences between
pleasant and unpleasant, depending on, among others, the stimulus
[e.g., unpleasant pinches evoking pain (nociception)] and location
on the body (e.g., pleasant strokes in erogenous zones). In addition to touch in communication, touch can also be employed in
psychotherapy (Phelan 2009) and nursing (Gleeson and Timmins
2005). Examples range from basic comforting touches and massaging to alternative therapies such as acupressure, Reiki, vibroacoustic therapy, and low-frequency vibration (Wigram 1996;
Kvam 1997; Patrick 1999; Puhan et al. 2006; Prisby et al. 2008).
See Dijk et al. (2013) for more examples on mental, healthrelated, and bodily effects of touch. In this paper, we focus on
ICT mediated and generated social touch (the areas where psychology and computer science meet), meaning that areas of, for
instance, Reiki and low-frequency vibration fall outside the scope
of this paper. We first discuss the many roles of social touch in
our daily life before continuing with ICT mediated inter-human
touch and ICT generated and interpreted touch in human–agent
interaction.
In the 1990s (Vallbo et al. 1993), the first reports on so-called C
tactile afferents in human hairy skin were published. This neurophysiological channel in the skin reacts to soft, stroking touches,
and its activity strongly depends on stroking speed (with an optimum in the speed range 3–10 cm/s) and has a high correlation
with subjective ratings of the pleasantness of the touch. Research
over the past decades has shown that this system is not involved
in discriminative touch (Olausson et al. 2008) but underlies the
emotional aspects of touch and the development and function of
the social brain (McGlone et al. 2014). Social touches may activate
both this pleasurable touch system and the discriminative touch system (reacting to, for instance, pressure, vibration, and skin stretch).
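To make the velocity tuning of this channel concrete, the sketch below implements a toy inverted-U model of stroke pleasantness over log stroking speed, peaking inside the 3–10 cm/s window mentioned above. It is an illustration only, not the model fitted in the cited studies; the peak speed and width parameters are assumptions chosen for readability.

```python
import math

def ct_pleasantness(speed_cm_s: float, peak_cm_s: float = 5.0, width: float = 1.0) -> float:
    """Toy inverted-U model: predicted pleasantness of a stroke as a Gaussian
    over log stroking speed, peaking near the 3-10 cm/s range.
    peak_cm_s and width are illustrative assumptions, not fitted values."""
    if speed_cm_s <= 0:
        raise ValueError("stroking speed must be positive")
    x = math.log(speed_cm_s / peak_cm_s)
    return math.exp(-(x ** 2) / (2 * width ** 2))

# Slow (0.5 cm/s), CT-optimal (5 cm/s), and fast (30 cm/s) strokes:
for v in (0.5, 5.0, 30.0):
    print(f"{v:5.1f} cm/s -> predicted pleasantness {ct_pleasantness(v):.2f}")
```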
Affective Touch in ICT Systems
Incorporating the sense of touch in ICT systems started with
discriminative touch as an information channel, often in addition to vision and audition (touch for information processing).
We believe that we are on the verge of a second transition:
adding social or affective touch to ICT systems (touch for social
communication). In our digital era, an increasing amount of our
social interactions is mediated, for example, through (cell) phones,
video conferencing, text messaging, chat, or e-mail. Substituting
direct contact, these modern technologies make it easy to stay
in contact with distant friends and relatives, and they afford
some degree of affective communication. For instance, an audio
channel can transmit affective information through phonetic features like amplitude variation, pitch inflections, tempo, duration,
filtration, tonality, or rhythm, while a video channel supports nonverbal information such as facial expressions and body gestures.
However, current communication devices do not allow people to
express their emotions through touch and may therefore lack a
convincing experience of actual togetherness (social presence).
This technology-induced touch deprivation may even degrade
the potential beneficial effects of mediated social interaction [for
reviews of the negative side effects of touch deprivation see
Touch to Communicate Emotions
Hertenstein et al. (2006, 2009) showed that touch alone can effectively be used to convey distinct emotions such as anger, fear,
and disgust. In addition, touch plays a role in communicating
more complex social messages like trust, receptivity, affection
(Mehrabian 1972; Burgoon 1991) and nurture, dependence, and
affiliation (Argyle 1975). Touch can also enhance the meaning
of other forms of verbal and non-verbal communication, e.g.,
touch amplifies the intensity of emotional displays from our face
and voice (Knapp and Hall 2010). Examples of touches used to
communicate emotions are shaking, pushing, and squeezing to
communicate anger, hugging, patting, and stroking to communicate love (Gallace and Spence 2010). Jones and Yarbrough (1985)
stated that a handshake, an encouraging pat on the back, a sensual
caress, a nudge for attention, a tender kiss, or a gentle brush of the
shoulder can all convey a vitality and immediacy that is at times
far more powerful than language. According to App et al. (2011),
touch is the preferred non-verbal communication channel for
conveying intimate emotions like love and sympathy, confirmed
by, for instance, Debrot et al. (2013) who showed that responsive
touch between romantic partners enhances their affective state.
Touch, Physiological Functioning, and Wellbeing
McCance and Otley (1951) showed that licking and stroking by the
mother animal is critical to start certain physiological processes in
a new-born mammal. This indicates the direct link between skin
stimulation and physiological processes, a link that is preserved
later in life. For instance, gentle stroking touch can lower heart
rate and blood pressure (Grewen et al. 2003), increase transient
sympathetic reflexes and increase pain thresholds (Drescher et al.
1980; Uvnäs-Moberg 1997), and affect the secretion of stress
hormones (Whitcher and Fisher 1979; Shermer 2004; Ditzen et al.
2007). Women holding their partner’s hand showed attenuated
threat-related brain activity in response to mild electric shocks
(Coan et al. 2006) and reported less pain in a cold pressor task
(Master et al. 2009). Touch can also result in coupling or syncing of
electrodermal activity of interacting (romantic) couples (Chatel-Goldman et al. 2014). Interpersonal touch is the most commonly
used method of comforting (Dolin and Booth-Butterfield 1993)
and an instrument in nursing care (Bush 2001; Chang 2001;
Henricson et al. 2008). For example, patients who were touched
by a nurse during preoperative instructions experienced lower
subjective and objective stress levels than patients who were not (Whitcher and Fisher 1979).
In addition to touch affecting hormone levels, hormones
(i.e., oxytocin) also affect the perception of interpersonal touch.
Scheele et al. (2014) investigated the effect of oxytocin on the
perception of a presumed male or female touch on male participants and found that oxytocin increased the rated pleasantness
and brain activity of presumed female touches but not of male
touches (all touches were delivered by the same female experimenter). Ellingsen et al. (2014) reported that after oxytocin administration, the effect of touch on the evaluation of facial expression
increased. In addition, touch (handshaking in particular) can also
play a role in social chemo-signaling. Handshaking can lead to
the exchange of chemicals in sweat and behavioral data indicates
that people more often sniff their hands after a greeting with
a handshake than without a handshake (Frumin et al. 2015).
Many social touches are reciprocal in nature (like cuddling and
holding hands) and their dynamics rely on different mechanisms
all having their own time scale: milliseconds for the detection of
a touch (discriminative touch), hundreds of milliseconds and up
for the experience of pleasurable touch, and seconds and up for
physiological responses (including changes in hormone levels).
How these processes interact and possibly reinforce each other is
still terra incognita.
Physiological responses can also be indirect, i.e., the result of
social or empathetic mechanisms. Cooper et al. (2014) recently
showed that the body temperature of people decreased when
looking at a video of other people putting their hands in cold
water. Another recent paradigm is to use thermal and haptically
enhanced interpersonal speech communication. This showed that
warm and cold signals were used to communicate the valence
of messages (IJzerman and Semin 2009; Suhonen et al. 2012a).
Warm messages were used to emphasize positive feelings and
pleasant experiences, and to express empathy, comfort, closeness, caring, agreement, gratitude, and moral support. Cold feedback was consistently associated with negative issues.
Touch to Elicit Emotions
Not only can the sense of touch be used to communicate distinct
emotions but also to elicit (Suk et al. 2009) and modulate human
emotion. Please note that interpreting communicated emotions
differs from eliciting emotions as the former may be considered
as a cognitive task not resulting in physiological responses, e.g.,
one can perceive a touch as communicating anger without feeling
angry. Starting with the James–Lange theory (James 1884; Cannon
1927; Damasio 1999), the conscious experience of emotion is
the brain’s interpretation of physiological states. The existence of
specific neurophysiological channels for affective touch and pain
and the direct physiological reactions to touch indicate that there
may be a direct link between tactile stimulation, physiological
responses, and emotional experiences. Together with the distinct
somatotopic mapping between bodily tactile sensations and different emotional feelings as found by Nummenmaa et al. (2013),
one may assume that tactile stimulation of different bodily regions
can elicit a wide range of emotions.
Touch as a Behavior Modulator
In addition to communicating and eliciting emotions, touch provides an effective means of influencing people’s attitudes toward
persons, places, or services, their tendency to create bonds and
their (pro-)social behaviors [see Gallace and Spence (2010) for
an excellent overview]. This effect is referred to as the Midas
touch: a brief, casual touch (often at the hand or arm) that is
not necessarily consciously perceived named after king Midas
from Greek mythology who had the ability to turn everything he
touched into gold. For example, a half-second of hand-to-hand
touch from a librarian fostered more favorable impressions of the
library (Fisher et al. 1976), touching by a salesperson increased
positive evaluations of the store (Hornik 1992), and touch can
also boost the attractiveness ratings of the toucher (Burgoon et al.
1992). Recipients of such “simple” Midas touches are also more
likely to be more compliant or unselfish: willing to participate in
a survey (Guéguen 2002) or to adhere to medication (Guéguen
et al. 2010), volunteering for demonstrating in a course (Guéguen
2004), returning money left in a public phone (Kleinke 1977),
spending more money in a shop (Hornik 1992), tipping more in
a restaurant (Crusco and Wetzel 1984), helping with picking-up
dropped items (Guéguen and Fischer-Lokou 2003), or giving away
a cigarette (Joule and Guéguen 2007). In addition to these one-on-one examples, touch also plays a role in teams. For instance,
physical touch enhances team performance of basketball players
through building cooperation (Kraus et al. 2010). In clinical and
professional situations, interpersonal touch can increase information flow and causes people to evaluate communication partners
more favorably (Fisher et al. 1976).
Mediated Social Touch
In the previous section, we showed that people communicate
emotions through touch, and that inter-human touch can enhance
wellbeing and modulate behavior. In interpersonal communication, we may use touch more frequently than we are aware of.
Currently, interpersonal communication is often mediated and
given the inherent human need for affective communication,
mediated social interaction should preferably afford the same
affective characteristics as face-to-face communication. However,
despite the social richness of touch and its vital role in human
social interaction, existing communication media still rely on
vision and audition and do not support haptic interaction. For
a more in-depth reflection on the general effects of mediated
interpersonal communication, we refer to Konijn et al. (2008) and
Ledbetter (2014).
Tactile or kinesthetic interfaces in principle enable haptic communication between people who are physically apart, and may
thus provide mediated social touch, with all the physical, emotional, and intellectual feedback it supplies (Cranny-Francis 2011).
Recent experiments show that even simple forms of mediated
touch have the ability to elicit a wide range of distinct affective
feelings (Tsalamlal et al. 2014). This finding has stimulated the
study and design of devices and systems that can communicate,
elicit, enhance, or influence the emotional state of a human by
means of mediated touch.
Remote Collaboration Between Groups
Collaborative virtual environments are increasingly used for distance education [e.g., Mikropoulos and Natsis (2011)], training
simulations [e.g., Dev et al. (2007) and Flowers and Aggarwal
(2014)], therapy treatments (Bohil et al. 2011), and for social
interaction venues (McCall and Blascovich 2009). It has been
shown that adding haptic feedback to the interaction between
users of these environments significantly increases their perceived
social presence (Basdogan et al. 2000; Sallnäs 2010).
Another recent development is telepresence robots that enable
users to physically interact with geographically remote persons
and environments. Their ultimate goal is to provide users with
the illusion of a physical presence in remote places. Telepresence
robots combine physical and remote presence and have a wide
range of potential social applications like remote embodied teleconferencing and teaching, visiting or monitoring elderly in care
centers, and making patient rounds in medical facilities (Kristoffersson et al. 2013). To achieve an illusion of telepresence, the
robot should be able to reciprocate the user’s behavior and to provide the user with real-time multisensory feedback. As far as we are aware, systems including the sense of touch have not been described yet.
Remote Communication Between Partners
Intimacy is of central importance in creating and maintaining
strong emotional bonds. Humans have an important social and
personal need to feel connected in order to maintain their interpersonal relationships (Kjeldskov et al. 2004). A large part of their
interpersonal communication is emotional rather than factual
(Kjeldskov et al. 2004).
The vibration function on a mobile phone has been used to
render emotional information for blind users (Réhman and Liu
2010) and a similar interface can convey emotional content in
instant messaging (Shin et al. 2007). Also, a wide range of systems
have been developed for the mediated representation of specific
touch events between dyads such as kisses (Saadatian et al. 2014),
hugs (Mueller et al. 2005; Cha et al. 2008; Teh et al. 2008; Gooch and Watts 2010; Tsetserukou 2010), pokes (Park et al. 2011), handholding (Gooch and Watts 2012; Toet et al. 2013), handshakes (Bailenson et al. 2007), strokes on the hand (Eichhorn et al. 2008), arm (Huisman et al. 2013) and cheek (Park et al. 2012), pinches, tickles (Furukawa et al. 2012), pats (Bonanni et al. 2006), squeezes (Rantala et al. 2013), thermal signals (Gooch and Watts 2010; Suhonen et al. 2012a,b), massages (Chung et al. 2009), and intimate sexual touches (Solon 2015).
In addition to direct mediation, there is also an option to use indirect ways, for instance, through avatars in a virtual world. Devices like a haptic-jacket system can enhance the communication between users of virtual worlds such as Second Life by enabling the exchange of touch cues resembling encouraging pats and comforting hugs between users and their respective avatars (Hossain et al. 2011). The Huggable is a semi-autonomous robotic teddy bear equipped with somatic sensors, intended to facilitate affective haptic communication between two people (Lee et al. 2009) through a tangible rather than a virtual interface. Using these systems, people can not only exchange messages but also emotionally and physically feel the social presence of the communication partner (Tsetserukou and Neviarouskaya 2010).
The above examples can be considered demonstrations of the potential devices and applications and the richness of social touch. Although it appears that virtual interfaces can effectively transmit emotion even with touch cues that are extremely degraded (e.g., a handshake that is lacking grip, temperature, dryness, and texture: Bailenson et al. 2007), the field lacks rigorous validation and systematic exploration of the critical parameters. The few exceptions are the work by Smith and MacLean (2007) and by Salminen et al. (2008). Smith and MacLean performed an extensive study into the possibilities and the design space of an interpersonal haptic link and concluded that emotion can indeed be communicated through this medium. Salminen et al. (2008) developed a friction-based horizontally rotating fingertip stimulator to investigate emotional experiences and behavioral responses to haptic stimulation and showed that people can rate these kinds of stimuli as less or more unpleasant, arousing, avoidable, and dominating.
Social Touch Generated by ICT Systems
The previous sections dealt with devices that enable interpersonal social touch communication, i.e., a situation in which the touch signals are generated and interpreted by human users and only mediated through information and communication technology. One step beyond this is to include social touch in the communication between a user and a virtual entity. This implies three additional challenges: the generation of social touch signals from system to user, the interpretation of social touch signals provided by the user to the system, and closing the loop between these signals.
Generating Social Touch Signals
Lemmens et al. (2009) tested tactile jackets (and later blankets)
to increase emotional experiences while watching movies and
reported quite strong effects of well-designed vibration patterns.
Dijk et al. (2013) developed a dance vest for deaf teenagers. This
vest included an algorithm that translated music into vibration
patterns presented through the vest. Although the vibrations were not generated by a social entity, experiencing music has a substantial emotional component, as did the automatically generated vibration patterns.
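As an illustration of how such a translation could work, the sketch below maps a mono audio signal to per-frame vibration amplitudes by taking short-time RMS energy. It is a minimal stand-in under stated assumptions; the actual algorithm in the vest of Dijk et al. (2013) is not specified here, and the frame length is an assumption.

```python
import numpy as np

def music_to_vibration(samples: np.ndarray, sr: int, frame_s: float = 0.05) -> np.ndarray:
    """Map a mono audio signal (float samples) to per-frame vibration amplitudes
    in [0, 1] by taking the RMS energy of short, non-overlapping frames."""
    frame = max(1, int(sr * frame_s))     # frame length in samples (assumed 50 ms)
    n = len(samples) // frame
    if n == 0:
        return np.zeros(0)
    rms = np.array([
        np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2)) for i in range(n)
    ])
    peak = rms.max()
    return rms / peak if peak > 0 else rms  # normalize so the strongest frame drives full amplitude
```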
Beyond the scripted and one-way social touch cues employed
in the examples above, human–computer interaction applications
increasingly deploy intelligent agents to support the social aspects
of the interaction (Nijholt 2014). Social agents are used to communicate, express, and perceive emotions, maintain social relationships, interpret natural cues, and develop social competencies
(Fong et al. 2003; Li et al. 2011). Empathic communication in
general may serve to establish and improve affective relations with
social agents (Bickmore and Picard 2005), and may be considered
as a fundamental requirement for social agents that are designed to
function as social companions and therapists (Breazeal 2011). Initial studies have shown that human interaction with social robots
can indeed have therapeutic value (Kanamori et al. 2003; Wada
and Shibata 2007; Robinson et al. 2013). These agents typically use
facial expressions, gesture, and speech to convey affective cues to
the user. Social agents (either physically embodied as, e.g., robots
or represented as on-screen virtual agents) may also use (mediated) touch technology to communicate with humans (Huisman
et al. 2014a). In this case, the touch cue is not only mediated but
also generated and interpreted by an electronic system instead of
a human.
The physical embodiment of robots gives them a direct capability to touch users, while avatars may use the technology designed
for other HCI or mediated social touch applications to virtually
touch their user. Several devices have been proposed that enable
haptic interaction with virtual characters (Hossain et al. 2011;
Rahman and El Saddik 2011; Huisman et al. 2014a). Only few
studies investigated autonomous systems that touch users for
affective or therapeutic purposes (Chen et al. 2011), or that use
touch to communicate the affective state of artificial creatures to
their users (Yohanan and MacLean 2012).
Reactions to Mediated Touch at a Physiological,
Behavioral, and Social Level
Although the field generally lacks serious validation studies, there
is mounting evidence that people use, experience, and react to
direct and mediated social touch in similar ways (Bailenson and Yee 2007), at the physiological, psychological, behavioral, and social level.
At a physiological and psychological level, mediated affective
touch on the forearm can reduce heart rate of participants that
experienced a sad event (Cabibihan et al. 2012). Mediated touch
affects the quality of a shared experience and increases the intimacy felt toward the other person (Takahashi et al. 2011). Stimulation of someone’s hand through mediated touch can modulate
the quality of a remotely shared experience (e.g., the hilariousness
of a movie) and increase sympathy for the communication partner
(Takahashi et al. 2011). In a storytelling paradigm, participants
experienced a significantly higher degree of connectedness with
the storyteller when the speech was accompanied by remotely
administered squeezes in the upper arm (Wang et al. 2012).
Additional evidence for the potential effects of mediated touch
are found in the fact that hugging a robot medium while talking
increases affective feelings and attraction toward a conversation
partner (Kuwamura et al. 2013; Nakanishi et al. 2013). Participants receiving tactile facial stimulation experienced a stranger
receiving similar stimulation to be closer, more positive and more
similar to themselves when they were provided with synchronous
visual feedback (Paladino et al. 2010).
At a behavioral level, the most important observation is that
the effect of a mediated touch on people’s pro-social behavior is
similar to that of a real touch. According to Haans and IJsselsteijn
(2009a), a virtual Midas touch has effects in the same order of
magnitude as a real Midas touch. At the social level, the use
of mediated touch is only considered appropriate as a means of
communication between people in close personal relationships
(Rantala et al. 2013), and the mere fact that two people are willing
to touch implies an element of trust and mutual understanding
(Collier 1985). The interpretation of mediated touch depends on
the type of interrelationship between sender and receiver (Rantala
et al. 2013), similar to direct touch (Coan et al. 2006; Thompson
and Hampton 2011) and like direct touch, mediated touch communication between strangers can cause discomfort (Smith and
MacLean 2007).
Recognizing and Interpreting Social Touch
Signals
Communication implies a two-way interaction and social robots
and avatars should therefore not only be able to generate but
also to recognize affectionate touches. For instance, robotic affective responses to touch may contribute to people’s quality of life
(Cooney et al. 2014). Touch capability is not only “nice to have”
but may even be a necessity: people expect social interaction with
embodied social agents to the extent that physical embodiment
without tactile interaction results in a negative appraisal of the
robot (Lee et al. 2006). In a recent study on the suitability of social
robots for the wellbeing of the elderly, all participants expressed
their wish for the robot to feel pleasant to hold or stroke and to
respond to touch (Hutson et al. 2011). The well-known example of
the pet seal Paro (Wada et al. 2010) shows how powerful a simple
device can be in evoking social touches. Paro responds to being touched but neither interprets social touch nor produces touch.
Similar effects are reported for touching a humanoid robot on the
shoulder: just being able to touch already significantly increases
trust toward the robot (Dougherty and Scharfe 2011).
Automatic recognition and interpretation of the affective content of human originated social touch is essential to support this
interaction (Argall and Billard 2010). Different approaches to
equipping robots with a sense of touch include covering them
with an artificial skin that simulates the human somatosensory
systems (Dahiya et al. 2010) or the use of fully embodied robots
covered with a range of different (e.g., temperature, proximity,
pressure) sensors (Stiehl et al. 2005). To fully capture a social
touch requires sensors that go beyond those used in the more
advanced area of haptics and that primarily involve discriminative
touch (e.g., contact, pressure, resistance). At least sensors for temperature and soft, stroking touch should be included to capture
important parameters of social touch. However, just equipping
a system (robot, avatar, or interface) with touch sensors is not
sufficient to enable affective haptic interaction. A system can
only appreciate and respond to affective touch in a natural way
when it is able (a) to determine where the touch was applied,
(b) to assess what kind of tactile stimulation was applied, and
(c) to appraise the affective quality of the touch (Nguyen et al.
2007). While video- and audio-based affect recognition have been
widely investigated (Calvo and D’Mello 2010), there have only
been a few studies on touch-based affect recognition. The results
of these preliminary studies indicate that affect recognition based
on tactile interaction between humans and robots is comparable
to that between humans (Naya et al. 1999; Cooney et al. 2012;
Altun and MacLean 2014; Jung et al. 2014; van Wingerden et al.
2014).
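A minimal sketch of such a touch-based affect recognition pipeline is given below: simple summary features are extracted from a pressure-sensor grid recording and fed to an off-the-shelf classifier. The feature set, activation threshold, and classifier choice are illustrative assumptions and do not reproduce the methods of the studies cited above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def touch_features(frames: np.ndarray) -> np.ndarray:
    """frames: (time, rows, cols) pressure samples from a touch-sensor grid.
    Returns a small feature vector: intensity, contact area, variability, duration."""
    per_frame_max = frames.max(axis=(1, 2))
    contact = (frames > 0.1 * frames.max()).mean(axis=(1, 2))  # fraction of taxels active per frame
    return np.array([
        frames.mean(),            # overall pressure
        per_frame_max.max(),      # peak pressure
        contact.mean(),           # average contact area
        per_frame_max.std(),      # temporal variability (e.g., pat vs. sustained hold)
        float(frames.shape[0]),   # duration in frames
    ])

def train_touch_classifier(recordings, labels):
    """recordings: list of (time, rows, cols) arrays; labels: e.g. "stroke", "pat", "squeeze"."""
    X = np.stack([touch_features(r) for r in recordings])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```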
Research on capturing emotions from touch input to a computer system (i.e., not in a social context) confirms the potential
of the touch modality (Zacharatos et al. 2014). Several research
groups worked on capturing emotions from traditional computer
input devices like mouse and keyboard based on the assumption
that a user’s emotional state affects the motor output system. A
general finding is that typing speed correlates with valence, with a decrease in typing speed for negative valence and increased speed for positive valence compared to typing speed in a neutral
emotional state (Tsihrintzis et al. 2008; Khanna and Sasikumar
2010). A more informative system includes the force pattern of
the key strokes. Using this information, very high-accuracy rates
(>90%) are reported (Lv et al. 2008) for categorizing six emotional
states (neutral, anger, fear, happiness, sadness, and surprise).
This technique requires force sensitive keyboards, which are not
widely available. Touch screens are used by an increasing number of people and offer much richer interaction parameters than
keystrokes such as scrolling, tapping, or stroking. Recent work by
Gao et al. (2012) showed that in a particular game played on the
iPod, touch inputs like stroke length, pressure, and speed were
important features related to a participant’s verbal description of
the emotional experience during the game. Using a linear SVM,
classification performance reached 77% for four emotional classes
(excited, relaxed, frustrated, and bored), close to 90% for two levels
of arousal, and close to 85% for two levels of valence.
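The sketch below illustrates the same idea with a linear SVM over three stroke features (length, mean pressure, mean speed). The numeric values are toy data made up purely for illustration, and the pipeline is not the one actually used by Gao et al. (2012).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Each row: [stroke length (px), mean pressure (0-1), mean speed (px/s)] -- toy values.
X = np.array([
    [420.0, 0.71, 900.0],   # long, hard, fast stroke
    [150.0, 0.22, 180.0],   # short, light, slow stroke
    [380.0, 0.65, 820.0],
    [130.0, 0.25, 200.0],
])
# The four classes reported by Gao et al. (2012); the pairing with rows above is illustrative.
y = ["excited", "relaxed", "frustrated", "bored"]

model = make_pipeline(StandardScaler(), LinearSVC())
model.fit(X, y)
print(model.predict([[300.0, 0.5, 600.0]]))  # classify a new stroke
```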
Closing the Loop
A robot that has the ability to “feel,” “understand,” and “respond”
to touch in a human-like way will be capable of more intuitive and
meaningful interaction with humans. Currently, artificial entities
that include touch capabilities either produce or interpret social
touch, but not both. However, both are required to close the loop
and come to real, bidirectional interaction. The latter may require
strict adherence to, for instance, timing and immediacy; a handshake in which the partners are out-of-phase can be very awkward.
And as Cranny-Francis (2011) states, violating the tactile regime
may result in being rejected as alien and may seriously offend
others.
Reactions to Touching Robots and Avatars at a
Physiological, Behavioral, and Social Level
Although there are still very few studies in this field, and there has
been hardly any real formal evaluation, the first results of touch
interactions with artificial entities appear promising. For instance,
people experience robots that interact by touch as less machine-like (Cramer et al. 2009). Yohanan and colleagues (Yohanan et al.
2005; Yohanan and MacLean 2012) designed several haptic creatures to study a robot’s communication of emotional state and
concluded that participants experienced a broader range of affect
when haptic renderings were applied. Basori et al. (2009) showed
the feasibility of using vibration in combination with sound and
facial expression in avatars to communicate emotion strength.
Touch also assists in building a relationship with social actors:
hand squeezes (delivered through an airbladder) can improve the
relation with a virtual agent (Bickmore et al. 2010). Artificial
hands equipped with synthetic skins can potentially replicate not
only the biomechanical behavior but also the warmth (the “feel”)
of the human hand (Cabibihan et al. 2009, 2010, 2011). Users
perceived a higher degree of friendship and social presence when
interacting with a zoomorphic social robot with a warmer skin
(Park and Lee 2014). Recent experiments indicate that the warmth
of a robotic hand mediating social touch contributed significantly
to the feeling of social presence (Nakanishi et al. 2014) and holding
a warm robot hand increased feelings of friendship and trust
toward a robot (Nie et al. 2012).
Kotranza and colleagues (Kotranza and Lok 2008; Kotranza
et al. 2009) describe a virtual patient as a medical student’s training
tool that is able to be touched and to touch back. These touch-enabled virtual patients were treated more like real humans than
virtual patients without touch capabilities (students expressed
more empathy and used touch more frequently to comfort and
reassure the virtual patient). The authors concluded that by adding
haptic interaction to the virtual patient, the bandwidth of the
student-virtual patient communication increases and approaches
that of human–human communication. In a study on the interaction between toddlers and a small humanoid robot, Tanaka
et al. (2007) found that social connectedness correlated with the
amount of touch between the child and robot. In a study where
participants were asked to brush off “dirt” from either virtual
objects or virtual humans, they touched virtual humans with
less force than non-human objects, and they touched the face
of a virtual human with less force than the torso, while male
virtual humans were touched with more force than female virtual humans (Bailenson and Yee 2008). Huisman et al. (2014b)
performed a study in which participants played a collaborative
augmented reality game together with two virtual agents, visible in
the same augmented reality space. During interaction, one of the
virtual agents touched the user on the arm by means of a vibrotactile display. They found that the touching virtual agent was
rated higher on affective adjectives than the non-touching agent.
Finally, Nakagawa et al. (2011) created a situation in which a robot
requested participants to perform a repetitive monotonous task.
This request was accompanied by an active touch, a passive touch,
or no touch. The result showed that the active touch increased
people’s motivation to continue performing the monotonous task.
This confirms the earlier finding of Haans and IJsselsteijn (2009a)
that the effect of the virtual Midas touch is in the same order of
magnitude as the real Midas touch effect.
Multisensory and Contextual Cues
The meaning and appreciation of touch critically depend on its
context (Collier 1985; Camps et al. 2012), such as the relation
between conversation partners (Burgoon et al. 1992; Thompson
and Hampton 2011), the body location of the touch (Nguyen et al.
1975), and the communication partner’s culture (McDaniel and
Andersen 1998). There is no one-to-one correspondence between
a touch and its meaning (Jones and Yarbrough 1985). Hence, the
touch channel should be coupled with other sensory channels
to clarify its meaning (Wang and Quek 2010). An important
research question is which multisensory and contextual cues are
critical. Direct (i.e., unmediated) touch is usually a multisensory
experience: during interpersonal touch, we typically experience
not only tactile stimulation but also changes in warmth along with
verbal and non-verbal visual, auditory, and olfactory signals. Nonverbal cues (when people both see, hear, feel, and possibly smell
their interaction partner performing the touching) may render
mediated haptic technology more transparent, thereby increasing
perceived social presence and enhancing the convincingness or
immediacy of social touch (Haans and IJsselsteijn 2009b, 2010).
Also, since the sight of touch activates brain regions involved in
somatosensory processing [Rolls (2010); even watching a videotaped version: Walker and McGlone (2015)], the addition of visual
feedback may enhance the associated haptic experience. Another
strong cue for physical presence is body warmth. In human social
interaction, physical temperature also plays an important role
in sending interpersonal warmth (trust) information. Thermal
stimuli may therefore serve as a proxy for social presence and
stimulate the establishment of social relationships (IJzerman and
Semin 2010).
In addition to these bottom-up, stimulus-driven aspects, top-down factors like expectations/beliefs of the receiver should be
accounted for (e.g., beliefs about the intent of the interaction
partner, familiarity with the partner, affordances of a physically
embodied agent, etc.) since they shape the perceived meaning
of touch (Burgoon and Walther 1990; Gallace and Spence 2010;
Suhonen et al. 2012b).
Research Topics
Mediated social touch is a relatively young field of research
that has the potential to substantially enrich human–human and
human–system interaction. Although it is still not clear to what
extent mediated touch can reproduce real touch, converging evidence seems to show that mediated touch shares important effects
with real touch. However, many studies have an anecdotal character without solid and/or generalizable conclusions and the key
studies in this field have not been replicated yet. This does not necessarily mean that the results are erroneous but it indicates that the
field has not matured enough and may suffer from a publication
bias. We believe that we need advancements in the following four
areas for the field to mature: building an overarching framework,
developing social touch basic building blocks, improving current
research methodologies, and solving specific ICT challenges.
Framework
The human skin in itself is a complex organ able to process many
different stimulus dimensions such as pressure, vibration, stretch,
and temperature (van Erp 2007). “Social touch” is what the brain
makes of these stimulus characteristics (sensations) taking into
account personality, previous experiences, social conventions, the
context, the object or person providing the touch, and probably
many more factors. The scientific domains involved in social
touch each have interesting research questions and answering
them helps the understanding of (real life or mediated) social
touch. In addition, we need an overarching framework to link
the results across disciplines, to foster multidisciplinary research,
and to encourage the transition from exploratory research to
hypothesis driven research.
Social and Cultural
Social touch has a strong (unwritten) etiquette (Cranny-Francis
2011). Important questions are how to develop a touch etiquette
for mediated touch and for social agents that can touch (van
Erp and Toet 2013), and how to incorporate social, cultural, and
individual differences with respect to acceptance and meaning
of a mediated or social agent’s touch. Individual differences may
include gender, attitude toward robots, and technology and touch
receptivity [the (dis-)liking of being touched, Bickmore et al.
2010]. An initial set of guidelines for this etiquette is given by
van Erp and Toet (2013). In addition, we should consider possible
ethical implications of the technology, ranging from affecting
people’s behavior without them being aware of it to the threat of
physical abuse “at a distance.”
Neuroscience
The recent finding that there exists a distinct somatotopic mapping between tactile sensations and different emotional feelings
(Nummenmaa et al. 2013; Walker and McGlone 2015) suggests
that it may also be of interest to determine a map of our responsiveness to interpersonal (mediated) touch across the skin surface (Gallace and Spence 2010). The availability of such a map may stimulate the further development of mediated social touch devices. Another research topic is the presumed close link between social touch and emotions and the potential underlying neurophysiological mechanisms, i.e., the connection between social touch and the emotional brain.
Social Touch Building Blocks
Gallace and Spence (2010) noted that even the most advanced
devices will not be able to deliver something that can approximate
realistic interpersonal touch if we do not know exactly what needs
to be communicated and how to communicate it. Our touch capabilities are very complex, and like mediated vision and audition,
mediated touch will always be degraded compared to real touch.
The question is how this degradation influences the effects aimed for.
A priori, mediated haptic communication should closely resemble non-mediated communication in order to be intuitively processed without introducing ambiguity or increasing the cognitive
load (Rantala et al. 2011). However, the results discussed in this
paper [e.g., Bailenson et al. (2007), Smith and MacLean (2007),
Haans and IJsselsteijn (2009a), Giannopoulos et al. (2011), and
Rantala et al. (2013)] indicate that social touch is quite robust to
degradations and it may not be necessary to mediate all physical
parameters accurately or at all.
However, it is currently not even clear how we can haptically
represent valence and arousal, let alone which parameters of the rich and complex touch characteristics are crucial in relation to the intended effects. Ideally, we
have a set of building blocks of social touch that can be applied
and combined depending on the situation.
Effect Measures
Social touch can evoke effects at many different levels in the
receiver: physiological, psychological, behavioral, and social, and
it is likely that effects at these different levels also interact. For
instance, (social) presence and emotions can reciprocally reinforce each other. Currently, a broad range of effect measures is
applied, which makes it difficult to compare results, assess interactions between levels, and combine experimental results into
an integrated perspective. This pleads for setting a uniform set
of validated and standardized measures that covers the different
levels and that is robust and sensitive to the hypothesized effects of
social touch. This set could include basic physiological measures
known to vary with emotional experience [e.g., heart rate variability and skin conductance; Hogervorst et al. 2014]; psychological
and social measures reflecting trust, proximity, togetherness, and
social presence (IJsselsteijn et al. 2003; Van Bel et al. 2008; van Bel
et al. 2009), and behavioral measures, e.g., quantifying compliance
and performance. Please note though that each set of measures
will have its own pitfalls. For instance, see Brouwer et al. (2015)
for a critical reflection on the use of neurophysiological measures
to assess cognitive or mental state, and Bailenson and Yee (2008)
on the use of self-report questionnaires.
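As one concrete example of a standardized physiological effect measure, the sketch below computes RMSSD, a common time-domain index of heart rate variability, from inter-beat intervals. The interval values are toy numbers for illustration only and do not come from any of the cited studies.

```python
import numpy as np

def rmssd(ibi_ms: np.ndarray) -> float:
    """Root mean square of successive differences of inter-beat intervals (ms),
    a common time-domain heart rate variability measure."""
    diffs = np.diff(ibi_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Toy inter-beat intervals (ms) for a baseline and a mediated-touch condition.
baseline = np.array([820, 815, 830, 810, 825, 818], dtype=float)
touch = np.array([840, 860, 835, 870, 845, 865], dtype=float)
print(f"RMSSD baseline: {rmssd(baseline):.1f} ms, touch: {rmssd(touch):.1f} ms")
```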
Specific ICT Challenges
Enabling ICT mediated, generated, and/or interpreted social
touch requires specific ICT knowledge and technology. We consider the following issues as most prominent.
Methodology
Not uncommon for research in the embryonic stage, mediated
social touch research is going through a phase of haphazard,
anecdotal studies demonstrating the concept and its potential. To
mature, the field needs rigorous replication and methodological
well-designed studies and protocols. The multidisciplinary nature
of the field adds to the diversity in research approaches.
Understanding Social Touches
With a few exceptions, mediated social touch studies are restricted
to producing a social touch and investigating its effects on a user.
To use social touch in interaction means that the system should
not only be able to generate social touches but also to receive
and understand social touches provided by human users. Taking the richness of human touch into account, this is not trivial. We may currently not even have the necessary sensor suite to capture a social touch adequately, including parameters like shear
and tangential forces, compliance, temperature, skin stretch, etc.
After adequate capturing, algorithms should determine the social
appraisal of the touch. Currently, the first attempts to capture
social touches with different emotional values on a single body
location (e.g., the arm) and to use computer algorithms to classify
them are undertaken (van Wingerden et al. 2014).
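A minimal sketch of what one sample of a captured social touch might contain is given below. The field names and units are assumptions derived from the parameters listed above, not an established capture standard.

```python
from dataclasses import dataclass

@dataclass
class TouchSample:
    """One time-stamped sample of a captured social touch.
    The field list follows the parameters named in the text (shear and tangential
    forces, compliance, temperature, skin stretch); names and units are
    illustrative assumptions."""
    t: float                 # time (s)
    location: str            # body site, e.g., "left_forearm"
    normal_force: float      # N, perpendicular to the skin
    shear_force: float       # N, tangential to the skin
    compliance: float        # mm/N of the touching surface
    temperature: float       # degrees Celsius at the contact
    skin_stretch: float      # mm
    velocity: float          # cm/s of the moving contact
```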
Controlled Studies
Only few studies have actually investigated mediated affect
conveyance, and compared mediated with unmediated touch.
Although it appears that mediated social touch can indeed to
some extent convey emotions (Bailenson et al. 2007) and induce
pro-social behavior [e.g., the Midas effect; Haans and IJsselsteijn
(2009a)], it is still not known to what extent it can also elicit strong
affective experiences (Haans and IJsselsteijn 2006) and how this all
compares to real touch or other control conditions.
Protocols
Previous studies on mediated haptic interpersonal communication mainly investigated the communication of deliberately performed (instructed) rather than naturally occurring emotions
(Bailenson et al. 2007; Smith and MacLean 2007; Rantala et al.
2013). Although this protocol is very time efficient, it relies heavily
on participants’ ability to spontaneously generate social touches
with, for instance, a specific emotional value. This is comparable
to the research domain of facial expression where often trained
actors are used to produce expressions on demand. One may consider training people in producing social touches on demand or
employ a protocol (scenario) that naturally evokes specific social
signals rather than instruct naïve participants to produce them.
Context Aware Computing and Social Signal Processing
The meaning of a social touch is highly dependent on the accompanying verbal and non-verbal signals of the sender and the
context in which the touch is applied. An ICT system involved
in social touch interaction should take the relevant parameters
into account, both in generating touch and in interpreting touch.
To understand and manage social signals of a person, the system
is communicating with is the main challenge in the – in itself
relatively young – field of social signal processing (Vinciarelli et al.
2008). Context aware (Schilit et al. 1994) implies that the system
can sense its environment and reason about it in the context of
social touch.
Congruency in Time, Space, and Semantics
As with most multimodal interactions, congruency of the signals in space, time, and meaning is of eminent importance. For
instance, touches should be congruent with other (mediated)
display modalities (visual, auditory, olfactory) to communicate
the intended meaning. In addition, congruence in time and space
between, for instance, a seen gesture and a resulting haptic sensation is required to support a common interaction metaphor
based on real touch. It has been shown that combining mediated
social touch with morphologically congruent imagery enhances
perceived social presence, whereas incongruent imagery results in
lower degrees of social presence (Haans and IJsselsteijn 2010).
Especially in closed-loop interaction (e.g., when holding or
shaking hands), signals that are out of sync may severely degrade
the interaction, thus requiring (near) real-time processing of touch
and other social signals and generation of adequate social touches
in reaction.
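A minimal sketch of enforcing such timing congruence in a closed-loop interaction is shown below. The 50 ms budget is an assumption for illustration only, since the acceptable asynchrony for mediated touch is itself an open research question, as is the right fallback when the budget is exceeded.

```python
import time

MAX_ASYNC_S = 0.05  # illustrative latency budget between seen gesture and felt touch

def render_congruent_touch(visual_event_time: float, actuate) -> bool:
    """Trigger the haptic actuator only if the touch can still be rendered within
    the latency budget of the corresponding visual event; otherwise skip it,
    since an out-of-sync touch can degrade the interaction."""
    lag = time.monotonic() - visual_event_time
    if lag <= MAX_ASYNC_S:
        actuate()
        return True
    return False

# Usage: record the visual event with time.monotonic() when the gesture is shown,
# then call render_congruent_touch(event_time, vibration_motor_on).
```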
Enhancing Touch Cues
Social touch seems robust to degradations and mediated touch
does not need to replicate all physical parameters accurately. The
flipside of degradation is enhancement. Future research should
investigate to what extent the affective quality of the mediated
touch signals can be enhanced by the addition of other communication channels or by controlling specific touch parameters. Touch
parameters do not necessarily have to be mediated one-to-one,
but, for instance, temperature and force profiles may be either
amplified or attenuated. The additional options mediation can
provide to social touch have not been explored yet.
Conclusion
Social touch is of eminent importance in inter-human social communication and grounded in specific neurophysiological processing channels. Social touch can have effects at many levels including
physiological (heart rate and hormone levels), psychological (trust
in others), and sociological (pro-social behavior toward others).
Current ICT advances like the embodiment of artificial entities, the development of advanced haptic and tactile display technologies and standards (van Erp et al. 2010, including initial guidelines for mediated social touch: van Erp and Toet 2013) enable the exploration of new ICT systems that employ this powerful communication option, for instance, to enhance communication between physically separated partners and increase trust in and compliance with artificial entities. There are two prerequisites to make these applications viable. First, inter-human social touch can be ICT mediated, and second, social touch can be ICT generated and understood, all without loss of effectiveness, efficiency, and user satisfaction.
In this paper, we show that there is converging evidence that both prerequisites can be met. Mediated social touch shows effects at the aforementioned levels, and these effects resemble those of a real touch, even if the mediated touch is severely degraded. We also report the first indications that a social touch can be generated by an artificial entity, although the evidence base is still small. Moreover, the first steps are taken to develop algorithms to automatically classify social touches produced by the user.
Our review also shows that (mediated) social touch is an embryonic field relying for a large part on technology demonstrations with only a few systematic investigations. To advance the field, we believe the focus should be on the following four activities: developing an overarching framework (integrating neuroscience, computer science, and social and behavioral science), developing basic social touch building blocks (based on the critical social touch parameters), applying stricter research methodologies (use controlled studies, validated protocols, and standard effect measures), and realizing breakthroughs in ICT (classifying social touches, context aware computing, social signal processing, congruence, and enhancing touch cues).
When we are successful in managing these challenges at the crossroads of ICT and psychology, we believe that (mediated) social touch can improve our wellbeing and quality of life, can bridge the gap between real and virtual (social) worlds, and can make artificial entities more human-like.
References
Bickmore, T.W., Fernando, R., Ring, L., and Schulman, D. 2010. Empathic touch by
relational agents. IEEE Trans. Affect. Comput. 1: 60–71. doi:10.1109/T-AFFC.
2010.4
Bickmore, T.W., and Picard, R.W. 2005. Establishing and maintaining longterm human-computer relationships. ACM Trans. Comput. Hum. Interact. 12:
293–327. doi:10.1145/1067860.1067867
Bohil, C.J., Alicea, B., and Biocca, F.A. 2011. Virtual reality in neuroscience research
and therapy. Nat. Rev. Neurosci. 12: 752–62. doi:10.1038/nrn3122
Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. TapTap: a haptic
wearable for asynchronous distributed touch therapy. In Proceedings of the
ACM Conference on Human Factors in Computing Systems CHI ‘06, 580–585.
New York, NY: ACM.
Breazeal, C. 2011. Social robots for health applications. In Proceedings of the IEEE
2011 Annual International Conference on Engineering in Medicine and Biology
(EMBC), 5368–5371. Piscataway, NJ: IEEE.
Brouwer, A.-M., Zander, T.O., van Erp, J.B.F., Korteling, J.E., and Bronkhorst,
A.W. 2015. Using neurophysiological signals that reflect cognitive or affective
state: six recommendations to avoid common pitfalls. Front. Neurosci. 9:136.
doi:10.3389/fnins.2015.00136
Burgoon, J.K. 1991. Relational message interpretations of touch, conversational
distance, and posture. J. Nonverbal Behav. 15:233–59. doi:10.1007/BF00986924
References
Altun, K., and MacLean, K.E. 2014. Recognizing affect in human touch of a robot.
Pattern Recognit. Lett. doi:10.1016/j.patrec.2014.10.016
App, B., McIntosh, D.N., Reed, C.L., and Hertenstein, M.J. 2011. Nonverbal channel
use in communication of emotion: how may depend on why. Emotion 11:
603–17. doi:10.1037/a0023164
Argall, B.D., and Billard, A.G. 2010. A survey of tactile human-robot interactions.
Rob. Auton. Syst. 58: 1159–76. doi:10.1016/j.robot.2010.07.002
Argyle, M. 1975. Bodily Communication. 2nd ed. London, UK: Methuen.
Bailenson, J.N., and Yee, N. 2007. Virtual interpersonal touch and digital
chameleons. J. Nonverbal Behav. 31: 225–42. doi:10.1007/s10919-007-0034-6
Bailenson, J.N., and Yee, N. 2008. Virtual interpersonal touch: haptic interaction
and copresence in collaborative virtual environments. Multimed. Tools Appl. 37:
5–14. doi:10.1007/s11042-007-0171-2
Bailenson, J.N., Yee, N., Brave, S., Merget, D., and Koslow, D. 2007. Virtual interpersonal touch: expressing and recognizing emotions through haptic devices. Hum.
Comput. Interact. 22: 325–53. doi:10.1080/07370020701493509
Basdogan, C., Ho, C.-H., Srinivasan, M.A., and Slater, M. 2000. An experimental
study on the role of touch in shared virtual environments. ACM Trans. Comput.
Hum. Interact. 7: 443–60. doi:10.1145/365058.365082
Basori, A.H., Bade, A., Sunar, M.S., Daman, D., and Saari, N. 2009. Haptic vibration
for emotional expression of avatar to enhance the realism of virtual reality.
Burgoon, J.K., and Walther, J.B. 1990. Nonverbal expectancies and the evaluative consequences of violations. Hum. Comm. Res. 17: 232–65. doi:10.1111/j.
1468-2958.1990.tb00232.x
Burgoon, J.K., Walther, J.B., and Baesler, E.J. 1992. Interpretations, evaluations, and
consequences of interpersonal touch. Hum. Comm. Res. 19: 237–63. doi:10.1111/
j.1468-2958.1992.tb00301.x
Bush, E. 2001. The use of human touch to improve the well-being of older adults:
A holistic nursing intervention. J. Holist. Nurs. 19(3):256–270. doi:10.1177/
089801010101900306
Cabibihan, J.-J., Ahmed, I., and Ge, S.S. 2011. Force and motion analyses of the
human patting gesture for robotic social touching. In Proceedings of the 2011
IEEE 5th International Conference on Cybernetics and Intelligent Systems (CIS),
165–169. Piscataway, NJ: IEEE.
Cabibihan, J.-J., Jegadeesan, R., Salehi, S., and Ge, S.S. 2010. Synthetic skins with
humanlike warmth. In Social Robotics, Edited by S. Ge, H. Li, J.J. Cabibihan, and
Y. Tan, 362–371. Berlin: Springer.
Cabibihan, J.-J., Pradipta, R., Chew, Y., and Ge, S. 2009. Towards humanlike social
touch for prosthetics and sociable robotics: Handshake experiments and finger
phalange indentations. In Advances in Robotics, Edited by J.H. Kim, S. Ge, P.
Vadakkepat, N. Jesse, A. Al Manum, K. Puthusserypady, et al., 73–79 Berlin:
Springer. doi:10.1007/978-3-642-03983-6_11
Cabibihan, J.-J., Zheng, L., and Cher, C.K.T. 2012. Affective tele-touch. In Social
Robotics, Edited by S. Ge, O. Khatib, J.J. Cabibihan, R. Simmons, and M.A.
Williams, 348–356. Berlin: Springer.
Calvo, R.A., and D’Mello, S. 2010. Affect detection: an interdisciplinary review of
models, methods, and their applications. IEEE Trans. Affect. Comput. 1: 18–37.
doi:10.1109/T-AFFC.2010.1
Camps, J., Tuteleers, C., Stouten, J., and Nelissen, J. 2012. A situational touch: how
touch affects people’s decision behavior. Social Influence 8: 237–50. doi:10.1080/
15534510.2012.719479
Cannon, W.B. 1927. The James-Lange theory of emotions: a critical examination
and an alternative theory. Am. J. Psychol. 39: 106–24. doi:10.2307/1415404
Cha, J., Eid, M., Barghout, A., and Rahman, A.M. 2008. HugMe: an interpersonal
haptic communication system. In IEEE International Workshop on Haptic Audio
visual Environments and Games (HAVE 2008), 99–102. Piscataway, NJ: IEEE.
Chang, S.O. 2001. The conceptual structure of physical touch in caring. J. Adv. Nurs.
33(6): 820–827. doi:10.1046/j.1365-2648.2001.01721.x
Chatel-Goldman, J., Congedo, M., Jutten, C., and Schwartz, J.L. 2014. Touch
increases autonomic coupling between romantic partners. Front. Behav. Neurosci. 8:95. doi:10.3389/fnbeh.2014.00095
Chen, T.L., King, C.-H., Thomaz, A.L., and Kemp, C.C. 2011. Touched by a robot: an
investigation of subjective responses to robot-initiated touch. In Proceedings of
the 6th International Conference on Human-Robot Interaction HRI ‘11, 457–464.
New York, NY: ACM.
Chung, K., Chiu, C., Xiao, X., and Chi, P.Y.P. 2009. Stress outsourced: a haptic social
network via crowdsourcing. In CHI ‘09 Extended Abstracts on Human Factors
in Computing Systems, Edited by D.R. Olsen, K. Hinckley, M. Ringel-Morris,
S. Hudson and S. Greenberg, 2439–2448. New York, NY: ACM. doi:10.1145/
1520340.1520346
Coan, J.A., Schaefer, H.S., and Davidson, R.J. 2006. Lending a hand: social regulation of the neural response to threat. Psychol. Sci. 17: 1032–9. doi:10.1111/j.
1467-9280.2006.01832.x
Collier, G. 1985. Emotional Expression. Hillsdale, NJ: Lawrence Erlbaum Associates
Inc.
Cooney, M.D., Nishio, S., and Ishiguro, H. 2012. Recognizing affection for a touchbased interaction with a humanoid robot. In IEEE/RSJ International Conference
on Intelligent Robots and Systems (IROS), 1420–1427. Piscataway, NJ: IEEE.
Cooney, M.D., Nishio, S., and Ishiguro, H. 2014. Importance of touch for conveying
affection in a multimodal interaction with a small humanoid robot. Int. J. Hum.
Rob. 12(1): 1550002. doi:10.1142/S0219843615500024
Cooper, E.A., Garlick, J., Featherstone, E., Voon, V., Singer, T., Critchley, H.D.,
et al. 2014. You turn me cold: evidence for temperature contagion. PLoS One
9:e116126. doi:10.1371/journal.pone.0116126
Cramer, H., Kemper, N., Amin, A., Wielinga, B., and Evers, V. 2009. “Give me a
hug”: the effects of touch and autonomy on people’s responses to embodied
social agents. Comput. Anim. Virtual Worlds 20: 437–45. doi:10.1002/cav.317
Cranny-Francis, A. 2011. Semefulness: a social semiotics of touch. Soc. Semiotics 21:
463–81. doi:10.1080/10350330.2011.591993
Crusco, A.H., and Wetzel, C.G. 1984. The midas touch: the effects of interpersonal touch on restaurant tipping. Pers. Soc. Psychol. B 10: 512–7. doi:10.1177/
0146167284104003
Dahiya, R.S., Metta, G., Valle, M., and Sandini, G. 2010. Tactile sensing – from
humans to humanoids. IEEE Trans. Rob. 26: 1–20. doi:10.1109/TRO.2009.
2033627
Damasio, A. 1999. The Feeling of What Happens: Body and Emotion in the Making
of Consciousness. London, UK: Heinemann.
Debrot, A., Schoebi, D., Perrez, M., and Horn, A.B. 2013. Touch as an interpersonal emotion regulation process in couples’ daily lives: the mediating
role of psychological intimacy. Pers. Soc. Psychol. B 39: 1373–85. doi:10.1177/
0146167213497592
Dev, P., Youngblood, P., Heinrichs, W.L., and Kusumoto, L. 2007. Virtual worlds
and team training. Anesthesiol. Clin. 25: 321–36. doi:10.1016/j.anclin.2007.03.
001
Dijk, E.O., Nijholt, A., van Erp, J.B.F., Wolferen, G.V., and Kuyper, E. 2013. Audiotactile stimulation: a tool to improve health and well-being? Int. J. Auton. Adapt.
Commun. Syst. 6: 305–23. doi:10.1504/IJAACS.2013.056818
Ditzen, B., Neumann, I.D., Bodenmann, G., von Dawans, B., Turner, R.A., Ehlert,
U., et al. 2007. Effects of different kinds of couple interaction on cortisol and
heart rate responses to stress in women. Psychoneuroendocrinology 32: 565–74.
doi:10.1016/j.psyneuen.2007.03.011
Dolin, D.J., and Booth-Butterfield, M. 1993. Reach out and touch someone: analysis of nonverbal comforting responses. Commun. Q. 41: 383–93. doi:10.1080/
01463379309369899
Dougherty, E., and Scharfe, H. 2011. Initial formation of trust: designing an interaction with geminoid-DK to promote a positive attitude for cooperation. In
Social Robotics, Edited by B. Mutlu, C. Bartneck, J. Ham, V. Evers, and T. Kanda,
95–103. Berlin: Springer.
Drescher, V.M., Gantt, W.H., and Whitehead, W.E. 1980. Heart rate response to
touch. Psychosom. Med. 42: 559–65. doi:10.1097/00006842-198011000-00004
Eichhorn, E., Wettach, R., and Hornecker, E. 2008. A stroking device for spatially
separated couples. In Proceedings of the 10th International Conference on Human
Computer Interaction with Mobile Devices and Services Mobile HCI ‘08, 303–306.
New York, NY: ACM.
Ellingsen, D.M., Wessberg, J., Chelnokova, O., Olausson, H., Aeng, B., and Eknes,
S. 2014. In touch with your emotions: oxytocin and touch change social impressions while others’ facial expressions can alter touch. Psychoneuroendocrinology
39: 11–20. doi:10.1016/j.psyneuen.2013.09.017
Field, T. 2010. Touch for socioemotional and physical well-being: a review. Dev. Rev.
30: 367–83. doi:10.1016/j.dr.2011.01.001
Fisher, J.D., Rytting, M., and Heslin, R. 1976. Hands touching hands: affective and
evaluative effects of an interpersonal touch. Sociometry 39: 416–21. doi:10.2307/
3033506
Flowers, M.G., and Aggarwal, R. 2014. Second LifeTM : a novel simulation platform
for the training of surgical residents. Expert Rev. Med. Devices 11: 101–3. doi:10.
1586/17434440.2014.863706
Fong, T., Nourbakhsh, I., and Dautenhahn, K. 2003. A survey of socially
interactive robots. Rob. Auton. Syst. 42: 143–66. doi:10.1016/S0921-8890(02)
00372-X
Frumin, I., Perl, O., Endevelt-Shapira, Y., Eisen, A., Eshel, N., Heller, I., et al.
2015. A social chemosignaling function for human handshaking. Elife 4: e05154.
doi:10.7554/eLife.05154
Furukawa, M., Kajimoto, H., and Tachi, S. 2012. KUSUGURI: a shared tactile
interface for bidirectional tickling. In Proceedings of the 3rd Augmented Human
International Conference AH ‘12, 1–8. New York, NY: ACM.
Gallace, A., Ngo, M.K., Sulaitis, J., and Spence, C. 2012. Multisensory presence in
virtual reality: possibilities & limitations. In Multiple Sensorial Media Advances
and Applications: New Developments in MulSeMedia, Edited by G. Ghinea, F.
Andres and S.R. Gulliver, 1–40. Vancouver, BC: IGI Global.
Gallace, A., and Spence, C. 2010. The science of interpersonal touch: an overview.
Neurosci. Biobehav. Rev. 34: 246–59. doi:10.1016/j.neubiorev.2008.10.004
Gao, Y., Bianchi-Berthouze, N., and Meng, H. 2012. What does touch tell us about
emotions in touchscreen-based gameplay? ACM Trans. Comput. Hum. Interact.
19: 1–30. doi:10.1145/2395131.2395138
Giannopoulos, E., Wang, Z., Peer, A., Buss, M., and Slater, M. 2011. Comparison of
people’s responses to real and virtual handshakes within a virtual environment.
Brain Res. Bull. 85: 276–82. doi:10.1016/j.brainresbull.2010.11.012
Gleeson, M., and Timmins, F. 2005. A review of the use and clinical effectiveness of
touch as a nursing intervention. Clin. Effect. Nurs. 9: 69–77. doi:10.1016/j.cein.
2004.12.002
Gooch, D., and Watts, L. 2010. Communicating social presence through thermal
hugs. In Proceedings of First Workshop on Social Interaction in Spatially Separated
Environments (SISSI2010), Edited by F. Schmid, T. Hesselmann, S. Boll, K.
Cheverst, and L. Kulik, 11–19. Copenhagen: International Society for Presence
Research.
Gooch, D., and Watts, L. 2012. Yourgloves, hothands and hotmits: devices to hold
hands at a distance. In Proceedings of the 25th Annual ACM Symposium on User
Interface Software and Technology UIST ‘12, 157–166. New York, NY: ACM.
Gordon, I., Voos, A.C., Bennett, R.H., Bolling, D.Z., Pelphrey, K.A. and Kaiser,
M.D. 2013. Brain mechanisms for processing affective touch. Hum. Brain Mapp.
34(4):914–922. doi:10.1002/hbm.21480
Gottlieb, G. 1971. Ontogenesis of sensory function in birds and mammals. In The
Biopsychology of Development, Edited by E. Tobach, L.R. Aronson, and E. Shaw,
67–128. New York, NY: Academic Press.
Grewen, K.M., Anderson, B.J., Girdler, S.S., and Light, K.C. 2003. Warm partner
contact is related to lower cardiovascular reactivity. Behav. Med. 29: 123–30.
doi:10.1080/08964280309596065
Guéguen, N. 2002. Touch, awareness of touch, and compliance with a request.
Percept. Mot. Skills 95: 355–60. doi:10.2466/pms.2002.95.2.355
Guéguen, N. 2004. Nonverbal encouragement of participation in a course: the effect
of touching. Soc. Psychol. Educ. 7: 89–98. doi:10.1023/B:SPOE.0000010691.
30834.14
Guéguen, N., and Fischer-Lokou, J. 2003. Tactile contact and spontaneous help:
an evaluation in a natural setting. J. Soc. Psychol. 143: 785–7. doi:10.1080/
00224540309600431
Guéguen, N., Meineri, S., and Charles-Sire, V. 2010. Improving medication adherence by using practitioner nonverbal techniques: a field experiment on the effect
of touch. J. Behav. Med. 33: 466–73. doi:10.1007/s10865-010-9277-5
Haans, A., and IJsselsteijn, W.A. 2006. Mediated social touch: a review of current research and future directions. Virtual Reality 9: 149–59. doi:10.1007/
s10055-005-0014-2
Haans, A., and IJsselsteijn, W.A. (2009a). The virtual Midas Touch: helping behavior
after a mediated social touch. IEEE Trans. Haptics 2: 136–40. doi:10.1109/TOH.
2009.20
Haans, A., and IJsselsteijn, W.A. (2009b). I’m always touched by your presence, dear:
combining mediated social touch with morphologically correct visual feedback.
In Proceedings of Presence 2009, 1–6. Los Angeles, CA: International Society for
Presence Research.
Haans, A., and IJsselsteijn, W.A. 2010. Combining mediated social touch with
vision: from self-attribution to telepresence? In Proceedings of Special Symposium
at EuroHaptics 2010: Haptic and Audio-Visual Stimuli: Enhancing Experiences
and Interaction, Edited by A. Nijholt, E.O. Dijk, and P.M.C. Lemmens, 35–46
Enschede:University of Twente.
Harlow, H.F., and Zimmermann, R.R. 1959. Affectional responses in the infant monkey; orphaned baby monkeys develop a strong and persistent attachment to inanimate surrogate mothers. Science 130: 421–32. doi:10.1126/science.130.3373.421
Henricson, M., Ersson, A., Määttä, S., Segesten, K., and Berglund, A.L. 2008. The outcome of tactile touch on stress parameters in intensive care: A randomized controlled trial. Complement. Ther. Clin. Prac. 14(4): 244–254.
Hertenstein, M.J., Holmes, R., McCullough, M., and Keltner, D. 2009. The communication of emotion via touch. Emotion 9: 566–73. doi:10.1037/a0016108
Hertenstein, M.J., Keltner, D., App, B., Bulleit, B.A., and Jaskolka, A.R. 2006. Touch
communicates distinct emotions. Emotion 6: 528–33. doi:10.1037/1528-3542.6.
3.528
Hogervorst, M.A., Brouwer, A.-M., and van Erp, J.B.F. 2014. Combining and comparing EEG, peripheral physiology and eye-related measures for the assessment
of mental workload. Front. Neurosci. 8:322. doi:10.3389/fnins.2014.00322
Hornik, J. 1992. Tactile stimulation and consumer response. J. Consum. Res. 19:
449–58. doi:10.1086/209314
Hossain, S.K.A., Rahman, A.S.M.M., and El Saddik, A. 2011. Measurements of
multimodal approach to haptic interaction in second life interpersonal communication system. IEEE Trans. Instrum. Meas. 60: 3547–58. doi:10.1109/TIM.
2011.2161148
Huisman, G., Bruijnes, M., Kolkmeier, J., Jung, M.M., Darriba Frederiks, A., and
Rybarczyk, Y. (2014a). Touching virtual agents: embodiment and mind. In
Innovative and Creative Developments in Multimodal Interaction Systems, Edited
by Y. Rybarczyk, T. Cardoso, J. Rosas, and L. Camarinha-Matos, 114–138.
Berlin: Springer.
Huisman, G., Kolkmeier, J., and Heylen, D. (2014b). Simulated social touch in a collaborative game. In Haptics: Neuroscience, Devices, Modeling, and Applications,
Edited by M. Auvray and C. Duriez, 248–256. Berlin: Springer.
Huisman, G., Darriba Frederiks, A., Van Dijk, E.M.A.G., Kröse, B.J.A., and Heylen,
D.K.J. 2013. Self touch to touch others: designing the tactile sleeve for social
touch. In Online Proceedings of TEI’13. Available at: http://www.tei-conf.org/13/
sites/default/files/page-files/Huisman.pdf
Hutson, S., Lim, S., Bentley, P.J., Bianchi-Berthouze, N., and Bowling, A. 2011.
Investigating the suitability of social robots for the wellbeing of the elderly. In
Affective Computing and Intelligent Interaction, Edited by S. D’Mello, A. Graesser,
B. Schuller, and J.C. Martin, 578–587. Berlin: Springer.
IJsselsteijn, W.A., van Baren, J., and van Lanen, F. 2003. Staying in touch: social
presence and connectedness through synchronous and asynchronous communication media (Part III). In Human-Computer Interaction: Theory and Practice,
Edited by C. Stephanidis and J. Jacko, 924–928. Boca Raton, FL: CRC Press.
IJzerman, H., and Semin, G.R. 2009. The thermometer of social relations: mapping social proximity on temperature. Psychol. Sci. 20: 1214–20. doi:10.1111/j.
1467-9280.2009.02434.x
IJzerman, H., and Semin, G.R. 2010. Temperature perceptions as a ground for social
proximity. J. Exp. Soc. Psychol. 46: 867–73. doi:10.1016/j.jesp.2010.07.015
James, W. 1884. What is an emotion? Mind 9: 188–205. doi:10.1093/mind/os-IX.
34.188
Jones, S.E., and Yarbrough, A.E. 1985. A naturalistic study of the meanings of touch.
Comm. Monogr. 52: 19–56. doi:10.1080/03637758509376094
Joule, R.V., and Guéguen, N. 2007. Touch, compliance, and awareness of tactile
contact. Percept. Mot. Skills 104: 581–8. doi:10.2466/pms.104.2.581-588
Jung, M.M., Poppe, R., Poel, M., and Heylen, D.K.J. 2014. Touching the VOID –
introducing CoST: corpus of social touch. In Proceedings of the 16th International
Conference on Multimodal Interaction (ICMI ‘14), 120–127. New York, NY:
ACM.
Kanamori, M., Suzuki, M., Oshiro, H., Tanaka, M., Inoguchi, T., Takasugi, H.,
et al. 2003. Pilot study on improvement of quality of life among elderly using
a pet-type robot. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, 107–112. Piscataway, NJ:
IEEE.
Khanna, P., and Sasikumar, M. 2010. Recognising emotions from keyboard stroke
pattern. Int. J. Comput. Appl. 11: 1–5. doi:10.5120/1614-2170
Kjeldskov, J., Gibbs, M., Vetere, F., Howard, S., Pedell, S., Mecoles, K., et al. 2004.
Using cultural probes to explore mediated intimacy. Australas J. Inf. Syst. 11.
Available at: http://journal.acs.org.au/index.php/ajis/article/view/128
Kleinke, C.L. 1977. Compliance to requests made by gazing and touching experimenters in field settings. J. Exp. Soc. Psychol. 13: 218–23. doi:10.1016/
0022-1031(77)90044-0
Knapp, M.L., and Hall, J.A. 2010. Nonverbal Communication in Human Interaction
(7th ed.). Boston, MA: Wadsworth, CENGAGE Learning.
Konijn, E.A., Utz, S., Tanis, M., and Barnes, S.B. 2008. Mediated Interpersonal
Communication. New York, NY: Routledge.
Kotranza, A., and Lok, B. 2008. Virtual human + tangible interface = mixed reality
human: an initial exploration with a virtual breast exam patient. In Proceedings
of the IEEE Virtual Reality Conference 2008 (VR ‘08), 99–106. Piscataway, NJ:
IEEE.
Kotranza, A., Lok, B., Deladisma, A., Pugh, C.M., and Lind, D.S. 2009. Mixed
reality humans: evaluating behavior, usability, and acceptability. IEEE Trans. Vis.
Comput. Graph. 15: 369–82. doi:10.1109/TVCG.2008.195
Kraus, M.W., Huang, C., and Keltner, D. 2010. Tactile communication, cooperation,
and performance: an ethological study of the NBA. Emotion 10: 745–9. doi:10.
1037/a0019382
Kristoffersson, A., Coradeschi, S., and Loutfi, A. 2013. A review of mobile
robotic telepresence. Adv. Hum. Comput. Int. 2013: 1–17. doi:10.1155/2013/
902316
Kuwamura, K., Sakai, K., Minato, T., Nishio, S., and Ishiguro, H. 2013. Hugvie: a
medium that fosters love. In The 22nd IEEE International Symposium on Robot
and Human Interactive Communication, 70–75. Gyeongju: IEEE.
Kvam, M.H. 1997. The effect of vibroacoustic therapy. Physiotherapy 83: 290–5.
doi:10.1016/S0031-9406(05)66176-7
Ledbetter, A.M. 2014. The past and future of technology in interpersonal communication theory and research. Commun. Stud. 65: 456–9. doi:10.1080/10510974.
2014.927298
Lee, J.K., Stiehl, W.D., Toscano, R.L., and Breazeal, C. 2009. Semi-autonomous robot
avatar as a medium for family communication and education. Adv. Rob. 23:
1925–49. doi:10.1163/016918609X12518783330324
Lee, K.M., Jung, Y., Kim, J., and Kim, S.R. 2006. Are physically embodied social
agents better than disembodied social agents? The effects of physical embodiment, tactile interaction, and people’s loneliness in human-robot interaction.
Int. J. Hum. Comput. Stud. 64: 962–73. doi:10.1016/j.ijhcs.2006.05.002
Lemmens, P., Crompvoets, F., Brokken, D., van den Eerenbeemd, J., and de
Vries, G.J. 2009. A body-conforming tactile jacket to enrich movie viewing. In
EuroHaptics Conference, 2009 and Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems. World Haptics 2009. Third Joint, 7–12.
Piscataway, NJ: IEEE Press.
Li, H., Cabibihan, J.-J., and Tan, Y. 2011. Towards an effective design of social robots.
Int. J. Soc. Rob. 3: 333–5. doi:10.1007/s12369-011-0121-z
Löken, L.S., Wessberg, J., Morrison, I., McGlone, F., and Olausson, H. 2009. Coding
of pleasant touch by unmyelinated afferents in humans. Nat. Neurosci. 12(5):
547–548. doi:10.1038/nn.2312
Lv, H.-R., Lin, Z.-L., Yin, W.-J., and Dong, J. 2008. Emotion recognition based on
pressure sensor keyboards. In IEEE International Conference on Multimedia and
Expo 2008, 1089–1092. Piscataway, NJ: IEEE.
Master, S.L., Eisenberger, N.I., Taylor, S.E., Naliboff, B.D., Shirinyan, D., and Lieberman, M.D. 2009. A picture’s worth: partner photographs reduce experimentally
induced pain. Psychol. Sci. 20: 1316–8. doi:10.1111/j.1467-9280.2009.02444.x
McCall, C., and Blascovich, J. 2009. How, when, and why to use digital experimental
virtual environments to study social behavior. Soc. Pers. Psychol. Compass 3:
744–58. doi:10.1111/j.1751-9004.2009.00195.x
McCance, R.A., and Otley, M. 1951. Course of the blood urea in newborn rats, pigs
and kittens. J. Physiol. 113: 18–22. doi:10.1113/jphysiol.1951.sp004552
McDaniel, E., and Andersen, P.A. 1998. International patterns of interpersonal
tactile communication: a field study. J. Nonverbal Behav. 22: 59–75. doi:10.1023/
A:1022952509743
McGlone, F., Wessberg, J., and Olausson, H. 2014. Discriminative and affective touch: sensing and feeling. Neuron 82: 737–55. doi:10.1016/j.neuron.2014.
05.001
Mehrabian, A. 1972. Nonverbal Communication. Chicago, IL: Aldine-Atherton.
Mikropoulos, T.A., and Natsis, A. 2011. Educational virtual environments: a tenyear review of empirical research (1999-2009). Comput. Educ. 56: 769–80. doi:10.
1016/j.compedu.2010.10.020
Montagu, A. 1972. Touching: The Human Significance of the Skin. New York, NY:
Harper & Row Publishers.
Morrison, I., Löken, L., and Olausson, H. 2010. The skin as a social organ. Exp. Brain
Res. 204: 305–14. doi:10.1007/s00221-009-2007-y
Morrison, I., Björnsdotter, M., and Olausson, H. 2011. Vicarious responses to
social touch in posterior insular cortex are tuned to pleasant caressing speeds.
J. Neurosci. 31(26):9554–9562.
Mueller, F., Vetere, F., Gibbs, M.R., Kjeldskov, J., Pedell, S., and Howard, S. 2005.
Hug over a distance. In CHI ‘05 Extended Abstracts on Human Factors in
Computing Systems, Edited by G. van der Veer and C. Gale, 1673–1676. New
York, NY: ACM. doi:10.1145/1056808.1056994
Nakagawa, K., Shiomi, M., Shinozawa, K., Matsumura, R., Ishiguro, H., and
Hagita, N. 2011. Effect of robot’s active touch on people’s motivation. In Proceedings of the 6th International Conference on Human-Robot Interaction HRI
‘11, 465–472. New York, NY: ACM.
Nakanishi, H., Tanaka, K., and Wada, Y. 2014. Remote handshaking: touch enhances
video-mediated social telepresence. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems CHI’14, 2143–2152. New York, NY: ACM.
Nakanishi, J., Kuwamura, K., Minato, T., Nishio, S., and Ishiguro, H. 2013. Evoking
affection for a communication partner by a robotic communication medium. In
The First International Conference on Human-Agent Interaction (iHAI 2013), 1–8.
Available at: http://hai-conference.net/ihai2013/proceedings/pdf/III-1-4.pdf
Naya, F., Yamato, J., and Shinozawa, K. 1999. Recognizing human touching behaviors using a haptic interface for a pet-robot. In Conference Proceedings of the
IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC ‘99),
1030–1034. Piscataway, NJ: IEEE.
Nguyen, N., Wachsmuth, I., and Kopp, S. 2007. Touch perception and emotional
appraisal for a virtual agent. In Proceedings of the 2nd Workshop Emotion and
Computing-Current Research and Future Impact, Edited by D. Reichardt and
P. Levi, 17–22. Stuttgart: Berufsakademie Stuttgart.
Nguyen, T., Heslin, R., and Nguyen, M.L. 1975. The meanings of
touch: sex differences. J. Commun. 25: 92–103. doi:10.1111/j.1460-2466.
1975.tb00610.x
Nie, J., Park, M., Marin, A.L., and Sundar, S.S. 2012. Can you hold my hand? Physical
warmth in human-robot interaction. In Proceeedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 201–202. Piscataway,
NJ: IEEE.
Nijholt, A. 2014. Breaking fresh ground in human-media interaction research.
Front. ICT 1:4. doi:10.3389/fict.2014.00004
Nummenmaa, L., Glerean, E., Hari, R., and Hietanen, J.K. 2013. Bodily maps
of emotions. Proc. Natl. Acad. Sci. U.S.A. 111: 646–51. doi:10.1073/pnas.
1321664111
Olausson, H.W., Cole, J., Vallbo, Å, McGlone, F., Elam, M., Krämer, H.H., et al. 2008.
Unmyelinated tactile afferents have opposite effects on insular and somatosensory cortical processing. Neurosci. Lett. 436: 128–32. doi:10.1016/j.neulet.2008.
03.015
Paladino, M.P., Mazzurega, M., Pavani, F., and Schubert, T.W. 2010. Synchronous
multisensory stimulation blurs self-other boundaries. Psychol. Sci. 21: 1202–7.
doi:10.1177/0956797610379234
Park, E., and Lee, J. 2014. I am a warm robot: the effects of temperature
in physical human – robot interaction. Robotica 32: 133–42. doi:10.1017/
S026357471300074X
Park, Y.W., Bae, S.H., and Nam, T.J. 2012. How do Couples Use CheekTouch Over
Phone Calls? New York, NY: ACM. 763–6.
Park, Y.W., Hwang, S., and Nam, T.J. 2011. Poke: emotional touch delivery through
an inflatable surface over interpersonal mobile communications. In Adjunct
Proceedings of the 24th Annual ACM Symposium Adjunct on User Interface
Software and Technology UIST ‘11, 61–62. New York, NY: ACM.
Patrick, G. 1999. The effects of vibroacoustic music on symptom reduction. IEEE
Eng. Med. Biol. Mag. 18: 97–100. doi:10.1109/51.752987
Phelan, J.E. 2009. Exploring the use of touch in the psychotherapeutic setting: a phenomenological review. Psychotherapy (Chic) 46: 97–111. doi:10.1037/a0014751
Prisby, R.D., Lafage-Proust, M.-H., Malaval, L., Belli, A., and Vico, L. 2008. Effects
of whole body vibration on the skeleton and other organ systems in man and
animal models: what we know and what we need to know. Ageing Res. Rev. 7:
319–29. doi:10.1016/j.arr.2008.07.004
Puhan, M.A., Suarez, A., Cascio, C.L., Zahn, A., Heitz, M., and Braendli, O. 2006.
Didgeridoo playing as alternative treatment for obstructive sleep apnoea syndrome: randomised controlled trial. BMJ 332: 266–70. doi:10.1136/bmj.38705.
470590.55
Rahman, A.S.M.M., and El Saddik, A. 2011. HKiss: real world based haptic interaction with virtual 3D avatars. In Proceedings of the 2011 IEEE International
Conference on Multimedia and Expo (ICME), 1–6. Piscataway, NJ: IEEE.
Rantala, J., Raisamo, R., Lylykangas, J., Ahmaniemi, T., Raisamo, J., Rantala, J., et al.
2011. The role of gesture types and spatial feedback in haptic communication.
IEEE Trans. Haptics 4: 295–306. doi:10.1109/TOH.2011.4
Rantala, J., Salminen, K., Raisamo, R., and Surakka, V. 2013. Touch gestures in
communicating emotional intention via vibrotactile stimulation. Int. J. Hum.
Comput. Stud. 7: 679–90. doi:10.1016/j.ijhcs.2013.02.004
Réhman, S., and Liu, L. 2010. iFeeling: vibrotactile rendering of human emotions on
mobile phones. In Mobile Multimedia Processing, Edited by X. Jiang, M.Y. Ma,
and C. Chen, 1–20. Berlin: Springer.
Robinson, H., MacDonald, B., Kerse, N., and Broadbent, E. 2013. The psychosocial
effects of a companion robot: a randomized controlled trial. J. Am. Med. Dir.
Assoc. 14: 661–7. doi:10.1016/j.jamda.2013.02.007
Rolls, E.T. 2010. The affective and cognitive processing of touch, oral texture, and
temperature in the brain. Neurosci. Biobehav. Rev. 34: 237–45. doi:10.1016/j.
neubiorev.2008.03.010
Saadatian, E., Samani, H., Parsani, R., Pandey, A.V., Li, J., Tejada, L., et al. 2014.
Mediating intimacy in long-distance relationships using kiss messaging. Int. J.
Hum. Comput. Stud. 72: 736–46. doi:10.1016/j.ijhcs.2014.05.004
Sallnäs, E.L. 2010. Haptic feedback increases perceived social presence. In Haptics:
Generating and Perceiving Tangible Sensations, Part II, Edited by A.M. Kappers,
J.B. Erp, W.M. Bergmann Tiest, and F.C. Helm, 178–185. Berlin: Springer.
Salminen, K., Surakka, V., Lylykangas, J., Raisamo, R., Saarinen, R., Raisamo,
R., et al. 2008. Emotional and behavioral responses to haptic stimulation. In
Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors
in Computing Systems, 1555–1562. New York, NY: ACM Press.
Scheele, D., Kendrick, K.M., Khouri, C., Kretzer, E., Schläpfer, T.E., StoffelWagner, B., et al. 2014. An oxytocin-induced facilitation of neural and emotional
responses to social touch correlates inversely with autism traits. Neuropsychopharmacology 39: 2078–85. doi:10.1038/npp.2014.78
Schilit, B., Adams, N., and Want, R. 1994. Context-aware computing applications. In
First Workshop on Mobile Computing Systems and Applications (WMCSA 1994),
85–90. Piscataway, NJ: IEEE.
Self, B.P., van Erp, J.B.F., Eriksson, L., and Elliott, L.R. 2008. Human factors issues
of tactile displays for military environments. In Tactile Displays for Navigation,
Orientation and Communication in Military Environments, Edited by J.B.F. van
Erp and B.P. Self, 3. Neuilly-sur-Seine: NATO RTO.
Shermer, M. 2004. A bounty of science. Sci. Am. 290: 33. doi:10.1038/
scientificamerican0204-33
Shin, H., Lee, J., Park, J., Kim, Y., Oh, H., and Lee, T. 2007. A tactile emotional
interface for instant messenger chat. In Proceedings of the 2007 Conference
on Human Interface, Edited by M.J. Smith and G. Salvendy, 166–175. Berlin:
Springer.
Smith, J., and MacLean, K. 2007. Communicating emotion through a haptic link:
design space and methodology. Int. J. Hum. Comput. Stud. 65: 376–87. doi:10.
1016/j.ijhcs.2006.11.006
Solon, O. 2015. These sex tech toys will blow your mind. WIRED. Available at:
http://www.wired.co.uk/news/archive/2014-06/27/sex-tech
Stiehl, W.D., Lieberman, J., Breazeal, C., Basel, L., Lalla, L., and Wolf, M. 2005.
Design of a therapeutic robotic companion for relational, affective touch. In
IEEE International Workshop on Robot and Human Interactive Communication
(ROMAN 2005), 408–415. Piscataway, NJ: IEEE.
Suhonen, K., Müller, S., Rantala, J., Väänänen-Vainio-Mattila, K., Raisamo, R., and
Lantz, V. (2012a). Haptically augmented remote speech communication: a study
of user practices and experiences. In Proceedings of the 7th Nordic Conference
on Human-Computer Interaction: Making Sense Through Design NordiCHI ‘12,
361–369. New York, NY: ACM.
Suhonen, K., Väänänen-Vainio-Mattila, K., and Mäkelä, K. (2012b). User experiences and expectations of vibrotactile, thermal and squeeze feedback in interpersonal communication. In Proceedings of the 26th Annual BCS Interaction
Specialist Group Conference on People and Computers BCS-HCI ‘12, 205–214.
New York, NY: ACM.
Suk, H.-J., Jeong, S.-H., Hang, T.-H., and Kwon, D.-S. 2009. Tactile sensation as
emotion elicitor. Kansei Eng. Int. 8(2): 147–52.
Takahashi, K., Mitsuhashi, H., Murata, K., Norieda, S., and Watanabe, K. 2011.
Improving shared experiences by haptic telecommunication. In 2011 International Conference on Biometrics and Kansei Engineering (ICBAKE), 210–215. Los
Alamitos, CA: IEEE. doi:10.1109/ICBAKE.2011.19
Tanaka, F., Cicourel, A., and Movellan, J.R. 2007. Socialization between toddlers
and robots at an early childhood education center. Proc. Natl. Acad. Sci. U.S.A.
104: 17954–8. doi:10.1073/pnas.0707769104
Teh, J.K.S., Cheok, A.D., Peiris, R.L., Choi, Y., Thuong, V., and Lai, S. 2008. Huggy
pajama: a mobile parent and child hugging communication system. In Proceedings of the 7th International Conference on Interaction Design and Children IDC
‘08, 250–257. New York, NY: ACM.
Thompson, E.H., and Hampton, J.A. 2011. The effect of relationship status on
communicating emotions through touch. Cognit. Emot. 25: 295–306. doi:10.
1080/02699931.2010.492957
Toet, A., van Erp, J.B.F., Petrignani, F.F., Dufrasnes, M.H., Sadhashivan, A., van
Alphen, D., et al. 2013. Reach out and touch somebody’s virtual hand. Affectively
connected through mediated touch. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 786–791.
Piscataway, NJ: IEEE Computer Society.
Tsalamlal, M.Y., Ouarti, N., Martin, J.C., and Ammi, M. 2014. Haptic communication of dimensions of emotions using air jet based tactile stimulation. J. Multimodal User Interfaces 9(1): 69–77. doi:10.1007/s12193-014-0162-3
Tsetserukou, D. 2010. HaptiHug: a novel haptic display for communication of hug
over a distance. In Haptics: Generating and Perceiving Tangible Sensations, Edited
by A.M. Kappers, J.B. Erp, W.M. Bergmann Tiest, and F.C. Helm, 340–347.
Berlin: Springer.
Tsetserukou, D., and Neviarouskaya, A. 2010. Innovative real-time communication
system with rich emotional and haptic channels. In Haptics: Generating and
Perceiving Tangible Sensations, Edited by A.M. Kappers, J.B. van Erp, W.M.
Bergmann Tiest, and F.C. Helm, 306–313. Berlin: Springer.
Tsihrintzis, G.A., Virvou, M., Alepis, E., and Stathopoulou, I.O. 2008. Towards
improving visual-facial emotion recognition through use of complementary
keyboard-stroke pattern information. In Fifth International Conference on
Information Technology: New Generations (ITNG 2008), 32–37. Piscataway,
NJ: IEEE.
Uvnäs-Moberg, K. 1997. Physiological and endocrine effects of social contact. Ann.
N. Y. Acad. Sci. 807: 146–63. doi:10.1111/j.1749-6632.1997.tb51917.x
Vallbo, A., Olausson, H., Wessberg, J., and Norrsell, U. 1993. A system of unmyelinated afferents for innocuous mechanoreception in the human skin. Brain Res.
628: 301–4. doi:10.1016/0006-8993(93)90968-S
Van Bel, D.T., IJsselsteijn, W.A., and de Kort, Y.A.W. 2008. Interpersonal connectedness: conceptualization and directions for a measurement instrument. In
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
(CHI’08), 3129–3134. New York, NY: ACM.
van Bel, D.T., Smolders, K.C.H.J., IJsselsteijn, W.A., and de Kort, Y.A.W. 2009. Social
connectedness: concept and measurement. In Proceedings of the 5th International
Conference on Intelligent Environments, Edited by V. Callaghan, A. Kameas, A.
Reyes, D. Royo, and M. Weber, 67–74. Amsterdam: IOS Press.
van Erp, J.B.F. 2007. Tactile Displays for Navigation and Orientation: Perception and
Behaviour. Utrecht: Utrecht University.
van Erp, J.B.F. 2012. The ten rules of touch: guidelines for social agents and robots
that can touch. In Proceedings of the 25th Annual Conference on Computer
Animation and Social Agents (CASA 2012), Singapore: Nanyang Technological
University.
van Erp, J.B.F., Kyung, K.-U., Kassner, S., Carter, J., Brewster, S., Weber, G., et al.
2010. Setting the standards for haptic and tactile interactions: ISO’s work. In
Haptics: Generating and Perceiving Tangible Sensations. Proceedings of Eurohaptics 2010, Edited by A.M.L. Kappers, J.B.F. van Erp, W.M. Bergmann Tiest, and
F.C.T. van der Helm, 353–358. Heidelberg: Springer.
van Erp, J.B.F., and Toet, A. 2013. How to touch humans. Guidelines for social agents
and robots that can touch. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 780–785. Geneva:IEEE
Computer Society. doi:10.1109/ACII.2013.77145
van Erp, J.B.F., and van Veen, H.A.H.C. 2004. Vibrotactile in-vehicle navigation
system. Transp. Res. Part F Traffic Psychol. Behav. 7: 247–56. doi:10.1016/j.trf.
2004.09.003
van Wingerden, S., Uebbing, T.J., Jung, M.M., and Poel, M. 2014. A neural network
based approach to social touch classification. In Proceedings of the 2014 Workshop on Emotion Representation and Modelling in Human-Computer-Interaction Systems (ERM4HCI '14), 7–12. New York, NY: ACM.
Vinciarelli, A., Pantic, M., Bourlard, H., and Pentland, A. 2008. Social signal processing: state-of-the-art and future perspectives of an emerging domain. In Proceedings of the 16th ACM International Conference on Multimedia, 1061–1070.
New York, NY: ACM.
Vrontou, S., Wong, A.M., Rau, K.K., Koerber, H.R., and Anderson, D.J. 2013.
Genetic identification of C fibres that detect massage-like stroking of hairy skin
in vivo. Nature 493: 669–73. doi:10.1038/nature11810
Wada, K., Ikeda, Y., Inoue, K., and Uehara, R. 2010. Development and preliminary
evaluation of a caregiver’s manual for robot therapy using the therapeutic seal
robot Paro. In Proceedings of the IEEE International Workshop on Robot and
Human Interactive Communication (RO-MAN 2010), 533–538. Piscataway, NJ:
IEEE.
Wada, K., and Shibata, T. 2007. Living with seal robots – its sociopsychological
and physiological influences on the elderly at a care house. IEEE Trans. Rob. 23:
972–80. doi:10.1109/TRO.2007.906261
Walker, S.C., and McGlone, F.P. 2015. Perceived pleasantness of social touch reflects
the anatomical distribution and velocity tuning of C-tactile afferents: an affective
homunculus. In Program No. 339.14/HH22. 2014 Neuroscience Meeting Planner,
Washington, DC: Society for Neuroscience.
Wang, R., and Quek, F. 2010. Touch & talk: contextualizing remote touch for affective interaction. In Proceedings of the 4th International Conference on Tangible,
Embedded, and Embodied Interaction (TEI ‘10), 13–20. New York, NY: ACM.
Wang, R., Quek, F., Tatar, D., Teh, K.S., and Cheok, A.D. 2012. Keep in touch:
channel, expectation and experience. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems CHI ‘12, 139–148. New York, NY: ACM.
Whitcher, S.J., and Fisher, J.D. 1979. Multidimensional reaction to therapeutic touch
in a hospital setting. J. Pers. Soc. Psychol. 37: 87–96. doi:10.1037/0022-3514.37.
1.87
Wigram, A.L. 1996. The Effects of Vibroacoustic Therapy on Clinical and NonClinical Populations. Ph.D. thesis, St. George’s Hospital Medical School, London
University, London.
Yohanan, S., Chan, M., Hopkins, J., Sun, H., and MacLean, K. 2005. Hapticat: exploration of affective touch. In Proceedings of the 7th International
Conference on Multimodal Interfaces (ICMI ‘05), 222–229. New York, NY:
ACM.
Yohanan, S., and MacLean, K. 2012. The role of affective touch in human-robot
interaction: human intent and expectations in touching the haptic creature. Int.
J. Soc. Rob. 4: 163–80. doi:10.1007/s12369-011-0126-7
Zacharatos, H., Gatzoulis, C., and Chrysanthou, Y.L. 2014. Automatic emotion
recognition based on body movement analysis: a survey. IEEE CGA 34: 35–45.
doi:10.1109/MCG.2014.106
Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be
construed as a potential conflict of interest.
Copyright © 2015 van Erp and Toet. This is an open-access article distributed under
the terms of the Creative Commons Attribution License (CC BY). The use, distribution
or reproduction in other forums is permitted, provided the original author(s) or
licensor are credited and that the original publication in this journal is cited, in
accordance with accepted academic practice. No use, distribution or reproduction is
permitted which does not comply with these terms.
Michael M.
1.0 out of 5 stars
Don't spend your hard-earned money on this garbage; you will regret it when it breaks within the year.
Reviewed in the United States on February 8, 2024
If you want a TV that lasts and is user friendly, DO NOT BUY Hisense. Two of the HDMI ports failed for no reason. The TV sat securely with a PC and a PS5 connected, and both ports broke. Hisense deemed it non-warranty because of a microscopic pinhole in the screen, which they also refused to cover. The user interface is nonsensical, clunky, and awkward. At night I like to turn on the auto-volume leveling (which isn't even very good; it still fluctuates wildly in volume compared to Vizio), and to do that you have to navigate through a maze of sub-menus. During the day I reverse the process. It's not a user-friendly TV: the menus don't make sense, and most of the settings don't have any detailed description, just a cryptic one-word label.
Bunny S.
1.0 out of 5 stars
Worst customer service ever!!
Purchased an 85 inch from Best Buy. It worked fine for a month or two, then the sound started going out; I would have to literally turn the TV off and back on to get the sound. They've been out twice, and they were a no-show on one appointment. Finally, here it is December and I've gotten them to tell me the TV cannot be repaired. Still waiting for my money to be returned/refunded. The only person who was nice was one of the repairmen. Calling the repair line was always a major ordeal, and you cannot speak to a supervisor. The worst experience I've ever had; I will never purchase another one of their products ever again.
JP
1.0 out of 5 stars
Do not buy a TV from Hisense!!
I've had my TV for 2 years. I keep all documents when I buy large appliances because I know that if I
don't and they break I'd be SOL. Called Hisense the day my TV just stopped turning on and although
I sent them pictures of all my documents they said they needed a manager to review my case. Two
days later they texted me saying that my warranty was for the wrong TV and sent me a link to a web
page for a warranty they said was the right one. I texted back asking for someone to call and
explain. No one did. I called several days later and was hung up on, left on hold, and eventually told
they would not honor my warranty. Do not buy this TV. There was no reason for my TV to stop
working. Now I have a 70 inch wall weight.
Tracy L.
2.0 out of 5 stars
Good for a few
The TV works fine for a year or so, then it won't work like it should. The apps won't work right, or not at all. The apps that are permanently on the TV you can't uninstall, so you can't reset them. It won't update at all. Overall it's a throwaway TV, good just to get by until you can get a better one.
Tammy B.
1.0 out of 5 stars
Sucks
There is not one thing that I liked about that TV. I thought the price was good, but they could give me another one for free and I wouldn't take it. It's the worst TV I've ever had to try and figure out how to navigate, and you couldn't get Alexa or the voice assistant to work on it, ever.
Tania A.
5.0 out of 5 stars
works excellent
It just needed linking. I love it.
Ernie
5.0 out of 5 stars
working great
Make sure you set up everything with the same Google account or Alexa account; if you don't, you won't see it on your device.
Jared B.
5.0 out of 5 stars
Easy to setup and works great
I setup my Hisense U75H Google TV to work with Alexa. Turned on and off TV, changed volume,
muted, unmuted. They all worked on the first command. I'm surprised so many people are having so
many issues.
Cole
5.0 out of 5 stars
It does work
After setup, sign out of your Hisense account and then sign back in. It will prompt you to enable the TV, and it works great.
Eileen G.
5.0 out of 5 stars
Works, first attempt
This works for the Hisense 55U7G. Not sure about all of the negative reviews without model
numbers. I assume that many are working with models before Alexa enabling. I'm sure that it would
do more with a Fire TV. I'm working with Android TV in my model.
John S.
3.0 out of 5 stars
Log out and then enable skill again
The review below works: you have to enable the skill, create a Hisense account, then log out of Hisense, go back to Alexa and enable the skill again, and the linking then works when you log back in. Stupid that it can't just work the first time.
Ricardo
3.0 out of 5 stars
Works Great
The app works 100% with Alexa; I'm able to give Alexa a lot of commands, from turning the TV on and off to controlling volume, etc.
Amazon Customer
3.0 out of 5 stars
More commands added but still lacking
When I first got this skill, it had zero functionality, but over time commands to turn the TV on and off
as well as adjust volume and change both the channel and the various inputs have come into play.
Still missing commands to open up specific apps, which would definitely make this app a must-have
for anyone with a Hisense.
Matt
3.0 out of 5 stars
Missing a major feature
It's a pain to set up, but it works.
Before you lose your mind, go into network settings and change "Wake-on-LAN/WiFi" to ON (which one depends on how the TV is connected to the internet), and you'll be able to turn it on and off.
Unfortunately it doesn't start any apps; it's just on and off, change input, and control volume.
Juanito D.
3.0 out of 5 stars
It work.. barely halfway
It's a little hard to set up, but at the same time it's easy; just be ready at setup time with your smartphone and email to follow it step by step. I saw many reviews saying the only thing it does is turn off the TV, not turn it on; that's not a problem with the skill, it's just a TV setting. At least my Hisense Android TV at factory settings comes with Wake on Wireless Network off; I just turned it on in the network settings, and now I can turn on the TV with Alexa commands. The only thing I can't do is change the input with commands; the skill settings say I can do it, but Alexa tells me it can't find any devices that can do that. But the commands the skill says it can do work: it can turn the TV on and off, change the volume, and activate apps.
A day ago
Failure after failure.
Switching to Mint took ages. You failed to enable one of us to connect at all. Her money is going to another provider. Texting rarely works or can take hours to deliver. Now, my phone has stopped working at all. Wi-Fi works; Wi-Fi calling doesn't. The other two phones on the plan work. I just got an email telling me my next 6 months are free. That would mean something if I actually had service. After all, zero service is only worth zero dollars.
A soon to be ex-customer.
Date of experience: March 7, 2024
Reply from Mint Mobile, 21 hours ago
Hi, thank you for contacting Mint Mobile. We're sorry for any inconvenience you may have experienced with the service. This is definitely not the experience we want you to have when using our service. Since we are unable to provide you with assistance through this channel, please contact us through our customer support channels at 800-683-7452 or our live chat platform, 7 days a week from 5 am to 7 pm PST, to take a look at your case and find the best solution.
Christy Unruh
1 review · US
2 days ago
Hassle-free service
The company never bothers me with upsells, and my better half will change service once his contract is up. He has had the same number since the early 90s. Anything they might want to tell me about is done by email, which is awesome!
Date of experience: February 06, 2024
Grace Fong
1 review · US
18 hours ago
A Canadian snowbird who wants to use the same phone number again next year.
I am a Canadian who just began to "snowbird" in Hawaii this year - it means I came here in the winter for a few months to escape the deep freeze. Having the Mint Mobile account has been very convenient for doing all kinds of business and shopping. This time I subscribed for three months as I return to Canada April 23. I will come back to my apartment in Kailua, HI again in Jan. 2025. It would be wonderful if I could use the same phone number next year. Would that be possible? Thank you.
Date of experience: March 07, 2024
Updated 2 days ago
Had for almost 2 years and for the most...
Had for almost 2 years and for the most part been happy until I go over data limits. I'm OK with a so-called "SLOW DOWN" speed when over my limit, but when it is so slow that everything times out, it is a totally useless service. Will be switching to another service unless another data level with Mint Mobile gives me more options than their competition.
| Present your answer without any extraneous information.
What is the general customer feedback?
A day ago
Failure after failure.
Switching to Mint took ages. You failed to enable one of us to connect at all. Her
money is going to another provider. Texting rarely works or can take hours to deliver.
Now, my phone has stopped working at all. Wi-Fi works. Wi-Fi calling doesn't. The
other two phones on the plan work. I just got an email telling me my next 6 months
are free. That would mean something if I actually had service. After all, zero service is
only worth zero dollars.
A soon-to-be ex-customer.
Date of experience: March 7, 2024
Useful · Share
Reply from Mint Mobile, 21 hours ago
Hi, thank you for contacting Mint Mobile. We're sorry for any inconvenience you may
have experienced with the service. This is definitely not the experience we want you to
have when using our service. Since we are unable to provide you with assistance
through this channel, please contact us through our customer support channels at 800-
683-7452 or our live chat platform, 7 days a week from 5 am to 7 pm PST, to take a
look at your case and find the best solution.
Christy Unruh
1 review · US
2 days ago
Hassle free service
The company never bothers me with upsells & my better half will change service once
his contract is up. He has had the same number since the early 90s.
Anything they might want to tell me about is done by my email, which is awesome!
Date of experience: February 06, 2024
Useful · Share
Grace Fong
1 review · US
18 hours ago
A Canadian snowbird who wants to use the same phone number again
next year.
I am a Canadian who just began to “snowbird” in Hawaii this year - it means I came
here in the winter for a few months to escape the deep freeze. Having the Mint mobile
account has been very convenient for doing all kinds of business and shopping. This
time I subscribed for three months as I return to Canada April 23. I will come back to
my apartment in Kailua, HI again in Jan. 2025. It would be wonderful if I could use the
same phone number next year. Would that be possible? Thank you.
Date of experience: March 07, 2024
Useful · Share
Updated 2 days ago
Had for almost 2 years and for the most...
Had for almost 2 years and for the most part been happy until I go over data limits.
I'm ok with a so-called “SLOW DOWN” speed when over my limit, but when it is so
slow info times out it is a totally useless service. Will be switching to another service
unless another data level with Mint Mobile gives me more options with their competition.
|
the model should only respond using information presented in the prompt/context block. the model response should contain at least three bullet points. | Please summarize the three articles mentioned | Legislative History
Congress enacted the UCMJ in 1950, and it entered into force in 1951. At the time of enactment, Article
43 provided that there was no statute of limitations for “desertion or absence without leave in time of war,
or with aiding the enemy, mutiny, or murder” and set a three-year statute of limitations for most other
offenses, including rape. Article 120 provided that a person guilty of rape “shall be punished by death or
such other punishment as a court-martial may direct.” Article 55, then as now, prohibits “[p]unishment by
flogging, or by branding, marking, or tattooing on the body, or any other cruel or unusual punishment.”
Amendments to Article 43
As relevant to Briggs and Collins, Congress has amended Article 43 three times. In 1986, Congress
provided that there was no statute of limitations for “any offense punishable by death” and set a five-year
statute of limitations for most other offenses. These statutes of limitations mirrored their civilian
counterparts, which are codified at 18 U.S.C. § 3281 for offenses punishable by death, and § 3282 for
other offenses. Congress again amended Article 43 in 2003, setting a twenty-five-year statute of
limitations for child abuse offenses, including rape of a child under Article 120. Most recently, Congress
amended Article 43 in the National Defense Authorization Act for Fiscal Year 2006 (2006 NDAA) in a
section titled “Extension of Statute of Limitations for Murder, Rape, and Child Abuse Offenses under the
[UCMJ].” The 2006 amendments provided that there is no statute of limitations for “murder or rape, or
[for] any other offenses punishable by death” and modified the twenty-five-year statute of limitations for
child abuse offenses other than rape. In an accompanying report, the Senate Committee on Armed
Services described the amendments as “clarify[ing] that all murders are included in the class of offenses
that has an unlimited statute of limitations . . . [and] includ[ing] rape in that class of offenses.”
Amendments to Article 120
As with Article 43, Congress has amended Article 120 several times since its enactment. Of note, in the
2006 NDAA, Congress amended Article 120 to specify that a person guilty of rape “shall be punished as a
court-martial may direct,” removing the statutory authority to punish rape by death.
Judicial Interpretations
Before Briggs and Collins, the Supreme Court had not interpreted Articles 43, 55, or 120, but several
lower courts, including the U.S. Court of Appeals for the Armed Forces (CAAF), had done so. Three
cases are particularly relevant to Briggs and Collins. First, in the 1983 case United States v. Matthews, the
CAAF (then known as the U.S. Court of Military Appeals) addressed whether Article 55 protected
servicemembers from cruel and unusual punishment in the same manner as the Eighth Amendment to the
U.S. Constitution. Specifically, the court considered whether the Eighth Amendment’s prohibition on
cruel and unusual punishment barred the imposition of the death penalty on a servicemember found guilty
of rape and murder. The court held that “a servicemember is entitled both by statute [under Article 55]
and under the Eighth Amendment to protection against ‘cruel and unusual punishments.’” It recognized,
however, that, “since in many ways the military community is unique, . . . there may be circumstances
under which the rules governing capital punishment of servicemembers will be different from those
applicable to civilians.”
Congressional Research Service 3
In dicta, the Matthews court further observed that, while “Congress obviously intended that in cases
where an accused servicemember is convicted of . . . rape, the court-martial members should have the
option to adjudge a death sentence,” this intent “[p]robably . . . cannot be constitutionally effectuated in a
case where the rape of an adult female is involved, . . . at least, where there is no purpose unique to the
military mission that would be served by allowing the death penalty for this offense.” The court based its
reasoning on the Supreme Court’s 1977 holding in Coker v. Georgia that “a sentence of death is grossly
disproportionate and excessive punishment for the crime of rape and is therefore forbidden by the Eighth
Amendment as cruel and unusual punishment.”
Second, in the 1998 case Willenbring v. Neurauter, the CAAF held there was no statute of limitations for
rape, relying on the contemporaneous language of Articles 43(a) (“A person charged . . . with any offense
punishable by death, may be tried and punished at any time without limitation”) and 120(a) (any person
“guilty of rape . . . shall be punished by death or such other punishment as a court-martial may direct”).
The CAAF considered its decision in Matthews and the Supreme Court’s decision in Coker but held that
the 1986 amendment to Article 43 “was meant to apply to the most serious offenses without listing each
one in the statute.” The CAAF concluded that rape under Article 120 was an offense that Congress
deemed punishable by death, regardless of whether such a sentence constitutionally could be imposed.
Third, and most recently, in the 2018 case United States v. Mangahas, the CAAF overruled Willenbring.
Relying on Coker, the CAAF reasoned that where “there is no set of circumstances under which the death
penalty could constitutionally be imposed for the rape of an adult woman, that offense is simply not
‘punishable by death.’” Recognizing that “Willenbring gave short shrift to this highly salient point,” the
court overruled its prior decision. The CAAF concluded that because rape was not constitutionally
punishable by death, rapes committed between 1986 and 2006 are subject to a five-year statute of
limitations under Article 43.
| System instruction: the model should only respond using information presented in the prompt/context block. the model response should contain at least three bullet points.
context: Legislative History
Congress enacted the UCMJ in 1950, and it entered into force in 1951. At the time of enactment, Article
43 provided that there was no statute of limitations for “desertion or absence without leave in time of war,
or with aiding the enemy, mutiny, or murder” and set a three-year statute of limitations for most other
offenses, including rape. Article 120 provided that a person guilty of rape “shall be punished by death or
such other punishment as a court-martial may direct.” Article 55, then as now, prohibits “[p]unishment by
flogging, or by branding, marking, or tattooing on the body, or any other cruel or unusual punishment.”
Amendments to Article 43
As relevant to Briggs and Collins, Congress has amended Article 43 three times. In 1986, Congress
provided that there was no statute of limitations for “any offense punishable by death” and set a five-year
statute of limitations for most other offenses. These statutes of limitations mirrored their civilian
counterparts, which are codified at 18 U.S.C. § 3281 for offenses punishable by death, and § 3282 for
other offenses. Congress again amended Article 43 in 2003, setting a twenty-five-year statute of
limitations for child abuse offenses, including rape of a child under Article 120. Most recently, Congress
amended Article 43 in the National Defense Authorization Act for Fiscal Year 2006 (2006 NDAA) in a
section titled “Extension of Statute of Limitations for Murder, Rape, and Child Abuse Offenses under the
[UCMJ].” The 2006 amendments provided that there is no statute of limitations for “murder or rape, or
[for] any other offenses punishable by death” and modified the twenty-five-year statute of limitations for
child abuse offenses other than rape. In an accompanying report, the Senate Committee on Armed
Services described the amendments as “clarify[ing] that all murders are included in the class of offenses
that has an unlimited statute of limitations . . . [and] includ[ing] rape in that class of offenses.”
Amendments to Article 120
As with Article 43, Congress has amended Article 120 several times since its enactment. Of note, in the
2006 NDAA, Congress amended Article 120 to specify that a person guilty of rape “shall be punished as a
court-martial may direct,” removing the statutory authority to punish rape by death.
Judicial Interpretations
Before Briggs and Collins, the Supreme Court had not interpreted Articles 43, 55, or 120, but several
lower courts, including the U.S. Court of Appeals for the Armed Forces (CAAF), had done so. Three
cases are particularly relevant to Briggs and Collins. First, in the 1983 case United States v. Matthews, the
CAAF (then known as the U.S. Court of Military Appeals) addressed whether Article 55 protected
servicemembers from cruel and unusual punishment in the same manner as the Eighth Amendment to the
U.S. Constitution. Specifically, the court considered whether the Eighth Amendment’s prohibition on
cruel and unusual punishment barred the imposition of the death penalty on a servicemember found guilty
of rape and murder. The court held that “a servicemember is entitled both by statute [under Article 55]
and under the Eighth Amendment to protection against ‘cruel and unusual punishments.’” It recognized,
however, that, “since in many ways the military community is unique, . . . there may be circumstances
under which the rules governing capital punishment of servicemembers will be different from those
applicable to civilians.”
Congressional Research Service 3
In dicta, the Matthews court further observed that, while “Congress obviously intended that in cases
where an accused servicemember is convicted of . . . rape, the court-martial members should have the
option to adjudge a death sentence,” this intent “[p]robably . . . cannot be constitutionally effectuated in a
case where the rape of an adult female is involved, . . . at least, where there is no purpose unique to the
military mission that would be served by allowing the death penalty for this offense.” The court based its
reasoning on the Supreme Court’s 1977 holding in Coker v. Georgia that “a sentence of death is grossly
disproportionate and excessive punishment for the crime of rape and is therefore forbidden by the Eighth
Amendment as cruel and unusual punishment.”
Second, in the 1998 case Willenbring v. Neurauter, the CAAF held there was no statute of limitations for
rape, relying on the contemporaneous language of Articles 43(a) (“A person charged . . . with any offense
punishable by death, may be tried and punished at any time without limitation”) and 120(a) (any person
“guilty of rape . . . shall be punished by death or such other punishment as a court-martial may direct”).
The CAAF considered its decision in Matthews and the Supreme Court’s decision in Coker but held that
the 1986 amendment to Article 43 “was meant to apply to the most serious offenses without listing each
one in the statute.” The CAAF concluded that rape under Article 120 was an offense that Congress
deemed punishable by death, regardless of whether such a sentence constitutionally could be imposed.
Third, and most recently, in the 2018 case United States v. Mangahas, the CAAF overruled Willenbring.
Relying on Coker, the CAAF reasoned that where “there is no set of circumstances under which the death
penalty could constitutionally be imposed for the rape of an adult woman, that offense is simply not
‘punishable by death.’” Recognizing that “Willenbring gave short shrift to this highly salient point,” the
court overruled its prior decision. The CAAF concluded that because rape was not constitutionally
punishable by death, rapes committed between 1986 and 2006 are subject to a five-year statute of
limitations under Article 43.
question: please summarize the three articles mentioned |
Use only information from the following context to answer the eventual question; do not use any information that isn't in the context. Bold any words that both:
1. Are specifically what's being asked about, and
2. Explicitly exist, verbatim, in both the context and the user's question.
For example, if I asked: "What is a hotdog?," then the thing I'm asking about would be a "hotdog," so if the text also explicitly said the word "hotdog," you would bold any instance of the word "hotdog" in your response. | Summarize what this text is saying about machines. | Ambiguity is celebrated in human language. It is a central feature of
literature, poetry, and humor. Ambiguity is anathema to computer language.
An ambiguous computer language is a nonsensical concept because the
predictability of computers is what gives part of their value; imagine a
computer that was asked, “what is 1 and 1” it randomly returned either “two”
or “11”. Although it is debatable whether every contract can be translated into
machine language, many of them can be.73 When lawyers or the programmers
they hire write contracts in code, there is less of a chance for ambiguity than
in natural language if only for the simple fact that artificial language must be
complete and predefined, whereas natural language is infinite.74 That is to say
a person can walk around and verbally recite lines of code and people can at
least understand what he is saying; a machine cannot understand human
language that it is not programmed to understand. All of this is simply to say
that the problem of ambiguity is reduced in the smart contract context.
Finally, all of the usual defenses to formation of a contract also apply
in the realm of smart contracts, although as will be seen later, enforcing the
remedy against a strong smart contract may prove problematic to a court.
Take unconscionability and illegality, for instance. If a vending machine were
to sell alcohol to minors or sell alcohol in a dry jurisdiction, then the contract
could be voided as illegal.75 As will be discussed, the remedies will be either
ex post through legal action or ex ante through regulation. In this instance, the
illegal contract can either be policed through a prohibition on alcoholic
vending machines76 or a system of preclearance where a driver’s license
scanner or some mechanism are required to ensure compliance with age
requirements. Similarly, suppose the vending machine charged $1,000 for a
can of Coke and a court were to find this to be substantively unconscionable.
The remedies would again either be in damages or in policing the use of such
vending machines before the contract could be formed. | Use only information from the following context to answer the eventual question; do not use any information that isn't in the context. Bold any words that both:
1. Are specifically what's being asked about, and
2. Explicitly exist, verbatim, in both the context and the user's question.
For example, if I asked: "What is a hotdog?," then the thing I'm asking about would be a "hotdog," so if the text also explicitly said the word "hotdog," you would bold any instance of the word "hotdog" in your response.
Ambiguity is celebrated in human language. It is a central feature of
literature, poetry, and humor. Ambiguity is anathema to computer language.
An ambiguous computer language is a nonsensical concept because the
predictability of computers is part of what gives them their value; imagine if a
computer that was asked, “what is 1 and 1,” randomly returned either “two”
or “11”. Although it is debatable whether every contract can be translated into
machine language, many of them can be.73 When lawyers or the programmers
they hire write contracts in code, there is less of a chance for ambiguity than
in natural language if only for the simple fact that artificial language must be
complete and predefined, whereas natural language is infinite.74 That is to say
a person can walk around and verbally recite lines of code and people can at
least understand what he is saying; a machine cannot understand human
language that it is not programmed to understand. All of this is simply to say
that the problem of ambiguity is reduced in the smart contract context.
Finally, all of the usual defenses to formation of a contract also apply
in the realm of smart contracts, although as will be seen later, enforcing the
remedy against a strong smart contract may prove problematic to a court.
Take unconscionability and illegality, for instance. If a vending machine were
to sell alcohol to minors or sell alcohol in a dry jurisdiction, then the contract
could be voided as illegal.75 As will be discussed, the remedies will be either
ex post through legal action or ex ante through regulation. In this instance, the
illegal contract can either be policed through a prohibition on alcoholic
vending machines76 or a system of preclearance where a driver’s license
scanner or some mechanism are required to ensure compliance with age
requirements. Similarly, suppose the vending machine charged $1,000 for a
can of Coke and a court were to find this to be substantively unconscionable.
The remedies would again either be in damages or in policing the use of such
vending machines before the contract could be formed.
Summarize what this text is saying about machines. |
Only use the information from the document within the context block when developing your answer. You mustn't use sources from outside of the context block or previous knowledge.
Include three bullet point lists in your response.
Respond in between 250-500 words. | Please describe one finding for each of the studies mentioned in the document text. | Video Game Play and Real-World Violence We must stop the glorification of violence in our society. This includes the gruesome and grisly video games that are now commonplace. It is too easy today for troubled youth to surround themselves with a culture that celebrates violence. —Donald Trump, U.S. President (2019) The tendency to link violent crimes to the playing of violent video games is so prevalent that a term exists to describe it: “the Grand Theft Fallacy.” As one might guess from the word fallacy, this tendency is not only flawed, it gets matters entirely backward. Countries that consume more video games have lower levels of violent crime than those devoid of this media (Markey and Ferguson 2017). Months when people play violent video games the most tend to be safer than months they play them less (Markey, Markey, and French 2015). Even when violent video games, like Grand Theft Auto, were first released, there tends to be a decrease in violent crimes (Beerthuizen, Weijters, and van der Laan 2017). These findings have been replicated by psychologists, economists, and sociologists at various universities considering numerous other variables (cf. Cunningham, Engelstätter, and Ward 2016; Ward 2011). Most strikingly, these findings are not unique to violent video game play— other forms of violent media have also been linked to decreases in violent crime. Contrary to the fear that violent television poses a threat to our society, violent assaults, rapes, and murders all decrease when people are watching extremely violent television shows (Messner 1986). Even violent movies have been linked to declines in real-world violence. As with violent video games, years in which the most violent films were released saw decreases in violent crime, and crime consistently decreases in the days following the release of popular violent movies (Dahl and DellaVigna 2009; Markey, French, and Markey 2015). Regardless of the type of violent media—games, movies, or television shows—the research is consistent. When society is exposed to violent media, there is a reliable reduction in real-world violence. The reason why violent video game play (and other violent media) seems to reduce crime can be traced back to what criminologists call “routine activity theory” (Felson 1994). The simple notion behind this theory is this: For a violent crime to occur, a perpetrator must be in the same location as the victim, and this location tends to be free of those who would likely prevent the crime. Now, consider how playing many hours of video games may keep these potential criminals and victims entertained and off the streets. Male gamers in the United States spend a total of 468 million hours each month playing video games (Snider 2014). These hours constitute time during which at-risk individuals remain inside their homes, instead of being out on the streets. In this manner, video game play could serve as an effective crime-reduction strategy. No taxpayer money is needed. It naturally targets those individuals who are at the highest risk for committing violence or being victims of violence, and it appears to be working. Video Game Play and Aggression If you shoot somebody in one of these games, you don’t go to jail, you don’t get penalized in some way—you get extra points! This doesn’t mean that your child will go out into the world and shoot someone. 
But they do use more aggressive language, they do use more aggressive images, they have less ability to control their anger, and they externalize things in these violent ways. It’s absolutely not good. —“Dr. Phil” McGraw, television personality (2005) As illustrated by Dr. Phil’s quote, although some might not think video games cause violent homicides, they are still willing to believe that video game play, especially violent video game play, causes aggressive behaviors like punching others, fighting, or bullying. In this context, aggressive behaviors are actions committed by an individual intending to harm another individual. Although similar to violent behaviors like homicides, aggressive acts tend not to cause such extreme physical harm (Bushman et al. 2016). Much of the research purporting to support the claim that video games cause more minor forms of aggression has done little more than establish associations between self-reports of video game play and self-reports of feelings. Figure 2 provides some examples researchers have used to examine whether video games cause aggression. As we can see from these items, these studies do not examine real acts of aggression. Instead, these questionnaires attempt to measure aggression by using items that assess whether an individual might be “jerky” (that is, believing that to say something nasty about an individual behind his or her back is acceptable), antisocial, (as in, “I feel unsociable”), gossipy (as in “I have spread gossip about people I do not like”), or—oddly enough—conservative (like someone who might say, “Any nation should be ready with a strong military at all times”) (Krahé and Möller 2004; Anderson and Dill 2000; Greitemeyer 2019; Anderson et al. 2004). Thus, the meaning and importance one can draw from such studies are extremely suspect. When video game researchers have conducted experiments, these studies have typically involved one group of participants who play a violent video game and another group who plays a nonviolent video game. After a short play session, participants’ aggressive thoughts or behaviors are assessed. Some researchers who have used this methodology found that individuals who play violent video games are more likely to expose others to loud irritating noises (Bushman and Gibson 2011), report feeling more hostile on a questionnaire (Anderson and Dill 2000), give longer prison sentences to hypothetical criminals (Deselms and Altman 2003), and even give hot sauce to people who do not like spicy food (Yang, Huesmann, and Bushman 2014). Importantly, many other researchers cannot replicate these effects (Kühn et al. 2019). So, even if these various experimental outcomes might be related to disagreeable thoughts, it is extremely questionable how well these responses translate to real-world aggressive behavior such as fighting, hitting, and bullying. Other scholars have raised both methodological and measurement concerns with studies examining aggression. For instance, one popular method, the competitive reaction time task (CRTT), measures how aggressive a person becomes after playing a violent video game by giving the player a chance to “blast” another person with an irritating noise. Specifically, participants are allowed to select both the duration and the intensity level (on a scale of zero to ten) of a white noise burst administered to another person. Unfortunately, no standardized scoring method exists for this measurement of aggression. 
Some researchers have scored aggression as the sum of the intensity and duration (Bushman and Gibson 2011), the product of the intensity and duration (Bartholow, Sestir, and Davis 2005), the log-transformation of duration, ignoring the intensity (Anderson and Dill 2000), and even the square root of the duration score multiplied by the intensity score (Carnagey and Anderson 2005). Collectively, there are at least 147 different ways researchers have scored this measurement. Given all these permutations, one can make it appear as if video games increase aggression, decrease aggression, or have no effect on aggression even within the same sample (Elson et al. 2014). Looking past such methodical problems, numerous scholars have conducted meta-analyses to try to understand better how big an effect video games have on these mundane aggressive outcomes. On average, only 0.4 percent to 4 percent of the variance in minor forms of aggression can be explained by violent video games (Ferguson 2015b; Hilgard, Engelhardt, and Rouder 2017). Keep in mind that this small effect is in reference to the effect of video game play on aggressive outcomes with minimal repercussions (e.g., giving hot sauce to another person who does not like spicy foods) that often have methodical issues (e.g., the CRTT) and are not direct measurements of real-world aggressive acts. Thus, the extremely small effect sizes linking violent video games to questionable proxy measurements of aggression found in many studies likely constitute an overestimate of any true effect video game play has on real-world aggressive behaviors. | Context: Video Game Play and Real-World Violence We must stop the glorification of violence in our society. This includes the gruesome and grisly video games that are now commonplace. It is too easy today for troubled youth to surround themselves with a culture that celebrates violence. —Donald Trump, U.S. President (2019) The tendency to link violent crimes to the playing of violent video games is so prevalent that a term exists to describe it: “the Grand Theft Fallacy.” As one might guess from the word fallacy, this tendency is not only flawed, it gets matters entirely backward. Countries that consume more video games have lower levels of violent crime than those devoid of this media (Markey and Ferguson 2017). Months when people play violent video games the most tend to be safer than months they play them less (Markey, Markey, and French 2015). Even when violent video games, like Grand Theft Auto, were first released, there tends to be a decrease in violent crimes (Beerthuizen, Weijters, and van der Laan 2017). These findings have been replicated by psychologists, economists, and sociologists at various universities considering numerous other variables (cf. Cunningham, Engelstätter, and Ward 2016; Ward 2011). Most strikingly, these findings are not unique to violent video game play— other forms of violent media have also been linked to decreases in violent crime. Contrary to the fear that violent television poses a threat to our society, violent assaults, rapes, and murders all decrease when people are watching extremely violent television shows (Messner 1986). Even violent movies have been linked to declines in real-world violence. As with violent video games, years in which the most violent films were released saw decreases in violent crime, and crime consistently decreases in the days following the release of popular violent movies (Dahl and DellaVigna 2009; Markey, French, and Markey 2015). 
Regardless of the type of violent media—games, movies, or television shows—the research is consistent. When society is exposed to violent media, there is a reliable reduction in real-world violence. The reason why violent video game play (and other violent media) seems to reduce crime can be traced back to what criminologists call “routine activity theory” (Felson 1994). The simple notion behind this theory is this: For a violent crime to occur, a perpetrator must be in the same location as the victim, and this location tends to be free of those who would likely prevent the crime. Now, consider how playing many hours of video games may keep these potential criminals and victims entertained and off the streets. Male gamers in the United States spend a total of 468 million hours each month playing video games (Snider 2014). These hours constitute time during which at-risk individuals remain inside their homes, instead of being out on the streets. In this manner, video game play could serve as an effective crime-reduction strategy. No taxpayer money is needed. It naturally targets those individuals who are at the highest risk for committing violence or being victims of violence, and it appears to be working. Video Game Play and Aggression If you shoot somebody in one of these games, you don’t go to jail, you don’t get penalized in some way—you get extra points! This doesn’t mean that your child will go out into the world and shoot someone. But they do use more aggressive language, they do use more aggressive images, they have less ability to control their anger, and they externalize things in these violent ways. It’s absolutely not good. —“Dr. Phil” McGraw, television personality (2005) As illustrated by Dr. Phil’s quote, although some might not think video games cause violent homicides, they are still willing to believe that video game play, especially violent video game play, causes aggressive behaviors like punching others, fighting, or bullying. In this context, aggressive behaviors are actions committed by an individual intending to harm another individual. Although similar to violent behaviors like homicides, aggressive acts tend not to cause such extreme physical harm (Bushman et al. 2016). Much of the research purporting to support the claim that video games cause more minor forms of aggression has done little more than establish associations between self-reports of video game play and self-reports of feelings. Figure 2 provides some examples researchers have used to examine whether video games cause aggression. As we can see from these items, these studies do not examine real acts of aggression. Instead, these questionnaires attempt to measure aggression by using items that assess whether an individual might be “jerky” (that is, believing that to say something nasty about an individual behind his or her back is acceptable), antisocial, (as in, “I feel unsociable”), gossipy (as in “I have spread gossip about people I do not like”), or—oddly enough—conservative (like someone who might say, “Any nation should be ready with a strong military at all times”) (Krahé and Möller 2004; Anderson and Dill 2000; Greitemeyer 2019; Anderson et al. 2004). Thus, the meaning and importance one can draw from such studies are extremely suspect. When video game researchers have conducted experiments, these studies have typically involved one group of participants who play a violent video game and another group who plays a nonviolent video game. 
After a short play session, participants’ aggressive thoughts or behaviors are assessed. Some researchers who have used this methodology found that individuals who play violent video games are more likely to expose others to loud irritating noises (Bushman and Gibson 2011), report feeling more hostile on a questionnaire (Anderson and Dill 2000), give longer prison sentences to hypothetical criminals (Deselms and Altman 2003), and even give hot sauce to people who do not like spicy food (Yang, Huesmann, and Bushman 2014). Importantly, many other researchers cannot replicate these effects (Kühn et al. 2019). So, even if these various experimental outcomes might be related to disagreeable thoughts, it is extremely questionable how well these responses translate to real-world aggressive behavior such as fighting, hitting, and bullying. Other scholars have raised both methodological and measurement concerns with studies examining aggression. For instance, one popular method, the competitive reaction time task (CRTT), measures how aggressive a person becomes after playing a violent video game by giving the player a chance to “blast” another person with an irritating noise. Specifically, participants are allowed to select both the duration and the intensity level (on a scale of zero to ten) of a white noise burst administered to another person. Unfortunately, no standardized scoring method exists for this measurement of aggression. Some researchers have scored aggression as the sum of the intensity and duration (Bushman and Gibson 2011), the product of the intensity and duration (Bartholow, Sestir, and Davis 2005), the log-transformation of duration, ignoring the intensity (Anderson and Dill 2000), and even the square root of the duration score multiplied by the intensity score (Carnagey and Anderson 2005). Collectively, there are at least 147 different ways researchers have scored this measurement. Given all these permutations, one can make it appear as if video games increase aggression, decrease aggression, or have no effect on aggression even within the same sample (Elson et al. 2014). Looking past such methodical problems, numerous scholars have conducted meta-analyses to try to understand better how big an effect video games have on these mundane aggressive outcomes. On average, only 0.4 percent to 4 percent of the variance in minor forms of aggression can be explained by violent video games (Ferguson 2015b; Hilgard, Engelhardt, and Rouder 2017). Keep in mind that this small effect is in reference to the effect of video game play on aggressive outcomes with minimal repercussions (e.g., giving hot sauce to another person who does not like spicy foods) that often have methodical issues (e.g., the CRTT) and are not direct measurements of real-world aggressive acts. Thus, the extremely small effect sizes linking violent video games to questionable proxy measurements of aggression found in many studies likely constitute an overestimate of any true effect video game play has on real-world aggressive behaviors.
Question: Please describe one finding for each of the studies mentioned in the document text.
System Instructions: Only use the information from the document within the context block when developing your answer. You mustn't use sources from outside of the context block or previous knowledge.
Include three bullet point lists in your response.
Respond in between 250-500 words. |
Your response must only present information that is present in the context block. Ensure that your response is clear and presents all information in an unbiased manner. You may use bulleted lists for organizing your response, but avoid all other markdown formatting. | Discuss the differing perspectives on whether or not the output of AI models should be afforded copyright protection. | Do AI Outputs Enjoy Copyright Protection?
The question of whether or not copyright protection may be afforded to AI outputs—such as images
created by DALL-E or texts created by ChatGPT—likely hinges at least partly on the concept of
“authorship.” The U.S. Constitution authorizes Congress to “secur[e] for limited Times to Authors . . . the
exclusive Right to their . . . Writings.” Based on this authority, the Copyright Act affords copyright
protection to “original works of authorship.” Although the Constitution and Copyright Act do not
explicitly define who (or what) may be an “author,” the U.S. Copyright Office recognizes copyright only
in works “created by a human being.” Courts have likewise declined to extend copyright protection to
nonhuman authors, holding that a monkey who took a series of photos lacked standing to sue under the
Copyright Act; that some human creativity was required to copyright a book purportedly inspired by
celestial beings; and that a living garden could not be copyrighted as it lacked a human author.
A recent lawsuit challenged the human-authorship requirement in the context of works purportedly
“authored” by AI. In June 2022, Stephen Thaler sued the Copyright Office for denying his application to
register a visual artwork that he claims was authored “autonomously” by an AI program called the
Creativity Machine. Dr. Thaler argued that human authorship is not required by the Copyright Act. On
August 18, 2023, a federal district court granted summary judgment in favor of the Copyright Office. The
court held that “human authorship is an essential part of a valid copyright claim,” reasoning that only
human authors need copyright as an incentive to create works. Dr. Thaler has stated that he plans to
appeal the decision.
Assuming that a copyrightable work requires a human author, works created by humans using generative
AI could still be entitled to copyright protection, depending on the nature of human involvement in the
creative process. However, a recent copyright proceeding and subsequent Copyright Registration
Guidance indicate that the Copyright Office is unlikely to find the requisite human authorship where an
AI program generates works in response to text prompts. In September 2022, Kris Kashtanova registered
a copyright for a graphic novel illustrated with images that Midjourney generated in response to text
inputs. In October 2022, the Copyright Office initiated cancellation proceedings, noting that Kashtanova
had not disclosed the use of AI. Kashtanova responded by arguing that the images were made via “a
creative, iterative process.” On February 21, 2023, the Copyright Office determined that the images were
not copyrightable, deciding that Midjourney, rather than Kashtanova, authored the “visual material.” In
March 2023, the Copyright Office released guidance stating that, when AI “determines the expressive
elements of its output, the generated material is not the product of human authorship.”
Some commentators assert that some AI-generated works should receive copyright protection, arguing
that AI programs are like other tools that human beings have used to create copyrighted works. For
example, the Supreme Court has held since the 1884 case Burrow-Giles Lithographic Co. v. Sarony that
photographs can be entitled to copyright protection where the photographer makes decisions regarding
creative elements such as composition, arrangement, and lighting. Generative AI programs might be seen
as a new tool analogous to the camera, as Kashtanova argued.
Other commentators and the Copyright Office dispute the photography analogy and question whether AI
users exercise sufficient creative control for AI to be considered merely a tool. In Kashtanova’s case, the
Copyright Office reasoned that Midjourney was not “a tool that [] Kashtanova controlled and guided to
reach [their] desired image” because it “generates images in an unpredictable way.” The Copyright Office
instead compared the AI user to “a client who hires an artist” and gives that artist only “general
directions.” The office’s March 2023 guidance similarly claims that “users do not exercise ultimate
creative control over how [generative AI] systems interpret prompts and generate materials.” One of
Kashtanova’s lawyers, on the other hand, argues that the Copyright Act does not require such exacting
creative control, noting that certain photographs and modern art incorporate a degree of happenstance.
Some commentators argue that the Copyright Act’s distinction between copyrightable “works” and
noncopyrightable “ideas” supplies another reason that copyright should not protect AI-generated works.
One law professor has suggested that the human user who enters a text prompt into an AI program—for
instance, asking DALL-E “to produce a painting of hedgehogs having a tea party on the beach”—has
“contributed nothing more than an idea” to the finished work. According to this argument, the output
image lacks a human author and cannot be copyrighted.
While the Copyright Office’s actions indicate that it may be challenging to obtain copyright protection for
AI-generated works, the issue remains unsettled. Applicants may file suit in U.S. district court to
challenge the Copyright Office’s final decisions to refuse to register a copyright (as Dr. Thaler did), and it
remains to be seen whether federal courts will agree with all of the office’s decisions. While the
Copyright Office notes that courts sometimes give weight to the office’s experience and expertise in this
field, courts will not necessarily adopt the office’s interpretations of the Copyright Act.
In addition, the Copyright Office’s guidance accepts that works “containing” AI-generated material may
be copyrighted under some circumstances, such as “sufficiently creative” human arrangements or
modifications of AI-generated material or works that combine AI-generated and human-authored material.
The office states that the author may only claim copyright protection “for their own contributions” to such
works, and they must identify and disclaim AI-generated parts of the work if they apply to register their
copyright. In September 2023, for instance, the Copyright Office Review Board affirmed the office’s
refusal to register a copyright for an artwork that was generated by Midjourney and then modified in
various ways by the applicant, since the applicant did not disclaim the AI-generated material. | Your response must only present information that is present in the context block. Ensure that your response is clear and presents all information in an unbiased manner. You may use bulleted lists for organizing your response, but avoid all other markdown formatting. Your response shall address the following user request: Discuss the differing perspectives on whether or not the output of AI models should be afforded copyright protection.
Do AI Outputs Enjoy Copyright Protection?
The question of whether or not copyright protection may be afforded to AI outputs—such as images
created by DALL-E or texts created by ChatGPT—likely hinges at least partly on the concept of
“authorship.” The U.S. Constitution authorizes Congress to “secur[e] for limited Times to Authors . . . the
exclusive Right to their . . . Writings.” Based on this authority, the Copyright Act affords copyright
protection to “original works of authorship.” Although the Constitution and Copyright Act do not
explicitly define who (or what) may be an “author,” the U.S. Copyright Office recognizes copyright only
in works “created by a human being.” Courts have likewise declined to extend copyright protection to
nonhuman authors, holding that a monkey who took a series of photos lacked standing to sue under the
Copyright Act; that some human creativity was required to copyright a book purportedly inspired by
celestial beings; and that a living garden could not be copyrighted as it lacked a human author.
A recent lawsuit challenged the human-authorship requirement in the context of works purportedly
“authored” by AI. In June 2022, Stephen Thaler sued the Copyright Office for denying his application to
register a visual artwork that he claims was authored “autonomously” by an AI program called the
Creativity Machine. Dr. Thaler argued that human authorship is not required by the Copyright Act. On
August 18, 2023, a federal district court granted summary judgment in favor of the Copyright Office. The
court held that “human authorship is an essential part of a valid copyright claim,” reasoning that only
human authors need copyright as an incentive to create works. Dr. Thaler has stated that he plans to
appeal the decision.
Assuming that a copyrightable work requires a human author, works created by humans using generative
AI could still be entitled to copyright protection, depending on the nature of human involvement in the
creative process. However, a recent copyright proceeding and subsequent Copyright Registration
Guidance indicate that the Copyright Office is unlikely to find the requisite human authorship where an
AI program generates works in response to text prompts. In September 2022, Kris Kashtanova registered
a copyright for a graphic novel illustrated with images that Midjourney generated in response to text
inputs. In October 2022, the Copyright Office initiated cancellation proceedings, noting that Kashtanova
had not disclosed the use of AI. Kashtanova responded by arguing that the images were made via “a
creative, iterative process.” On February 21, 2023, the Copyright Office determined that the images were
not copyrightable, deciding that Midjourney, rather than Kashtanova, authored the “visual material.” In
March 2023, the Copyright Office released guidance stating that, when AI “determines the expressive
elements of its output, the generated material is not the product of human authorship.”
Some commentators assert that some AI-generated works should receive copyright protection, arguing
that AI programs are like other tools that human beings have used to create copyrighted works. For
example, the Supreme Court has held since the 1884 case Burrow-Giles Lithographic Co. v. Sarony that
photographs can be entitled to copyright protection where the photographer makes decisions regarding
creative elements such as composition, arrangement, and lighting. Generative AI programs might be seen
as a new tool analogous to the camera, as Kashtanova argued.
Other commentators and the Copyright Office dispute the photography analogy and question whether AI
users exercise sufficient creative control for AI to be considered merely a tool. In Kashtanova’s case, the
Copyright Office reasoned that Midjourney was not “a tool that [] Kashtanova controlled and guided to
reach [their] desired image” because it “generates images in an unpredictable way.” The Copyright Office
instead compared the AI user to “a client who hires an artist” and gives that artist only “general
directions.” The office’s March 2023 guidance similarly claims that “users do not exercise ultimate
creative control over how [generative AI] systems interpret prompts and generate materials.” One of
Kashtanova’s lawyers, on the other hand, argues that the Copyright Act does not require such exacting
creative control, noting that certain photographs and modern art incorporate a degree of happenstance.
Some commentators argue that the Copyright Act’s distinction between copyrightable “works” and
noncopyrightable “ideas” supplies another reason that copyright should not protect AI-generated works.
One law professor has suggested that the human user who enters a text prompt into an AI program—for
instance, asking DALL-E “to produce a painting of hedgehogs having a tea party on the beach”—has
“contributed nothing more than an idea” to the finished work. According to this argument, the output
image lacks a human author and cannot be copyrighted.
While the Copyright Office’s actions indicate that it may be challenging to obtain copyright protection for
AI-generated works, the issue remains unsettled. Applicants may file suit in U.S. district court to
challenge the Copyright Office’s final decisions to refuse to register a copyright (as Dr. Thaler did), and it
remains to be seen whether federal courts will agree with all of the office’s decisions. While the
Copyright Office notes that courts sometimes give weight to the office’s experience and expertise in this
field, courts will not necessarily adopt the office’s interpretations of the Copyright Act.
In addition, the Copyright Office’s guidance accepts that works “containing” AI-generated material may
be copyrighted under some circumstances, such as “sufficiently creative” human arrangements or
modifications of AI-generated material or works that combine AI-generated and human-authored material.
The office states that the author may only claim copyright protection “for their own contributions” to such
works, and they must identify and disclaim AI-generated parts of the work if they apply to register their
copyright. In September 2023, for instance, the Copyright Office Review Board affirmed the office’s
refusal to register a copyright for an artwork that was generated by Midjourney and then modified in
various ways by the applicant, since the applicant did not disclaim the AI-generated material. |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | Explain the negative effects of salary arbitration in baseball from the point of view of the players. Why might a baseball player be opposed to salary arbitration? | IV. PROBLEMS WITH SALARY ARBITRATION
The requirements laid out by the collective bargaining agreement still leave much room for problems between the players and the teams. The first problem stems from the final offer or high/low format of arbitration procedure. Requiring the arbitrator to choose one amount or the other makes the final offer format unique. The arbitrator cannot reach a compromise between the two parties' offers. Since the arbitrator can only choose one side, many owners feel that this may be the root cause of the increasing salaries in baseball. The owners feel that abolition of salary arbitration is proper because it becomes a "win-win" situation for the players. After salary arbitration, "the players will always come out better than they were before." [EN 24] The issue for the owners is that if they present an amount that is significantly low, the arbitrator will tend to favor the player and choose the higher amount. [EN 25] In order to prevent this from happening, many teams tend to keep their amount submitted higher than they would like to prevent the arbitrator from choosing the higher amount given by players.
However, the counter argument is that the final offer format forces both sides to give a reasonable offer. During the arbitration process, the parties will be more concerned with how much the other side will offer. The parties will also concentrate on making their own offer fairer, so that the arbitrator will select it.
The second issue with salary arbitration is whether the evidence introduced between the two sides can affect the ongoing relationship between the team and the player after the arbitration hearings. According to the CBA criteria for salary arbitration, a team can essentially introduce evidence that may degrade a player and his accomplishments in the arbitration hearings. However, since the player will likely be returning to the same team the following year, the team may tend to hold back sensitive information which may offend the player. An arbitrator from a prominent New York law firm that handles some of the arbitration proceedings for the New York Yankees stated in a phone conversation on March 4, 2002, that most teams tend to hold back degrading and malicious information about some of their players because they are afraid of the repercussions in the following year. For example, many teams will not disclose information in an arbitration hearing about how the team manager, teammates, or members of the organization feel about a certain player. If this information is negative, it will not be a comfortable situation for that player if he remains with the team during the following season.
Some teams are afraid of introducing the degrading and detrimental evidence of a player and his conduct to prevent the player from being offended and taking those feelings of betrayal with him
to the field the following season. The arbitrator gave an example of a player being affected by an arbitration hearing in the National Hockey League ("NHL"). The case involved the owner of the
New York Islanders who went into a salary arbitration hearing with their then goalie. The owner introduced humiliating evidence into the arbitration hearing about that goalie. The goalie felt so
betrayed by his team and the whole process, he refused to return to the Islanders the following season. Thus, the goalie was traded because of his refusal to play directly due to the arbitration
hearings. To avoid an outcome such as this, most professional teams avoid introducing humiliating and degrading evidence of the players that are in salary arbitration in order to keep a
positive ongoing relationship the following season.
The other major problem of salary arbitration in baseball is what happens when either party wins. If the owner wins, the player may feel betrayed. A player may feel that he played well for the past few seasons to deserve a higher salary. By losing the arbitration hearing, the player may avoid playing up to his full potential in the following season due to resentment towards the team. There is also the possibility the player may play even better the following
season with the intention of not returning to his present team. A player may play beyond his potential to impress other teams and will not even consider re-signing with his present team as a free agent. A negative ongoing relationship is severely detrimental to baseball. The game becomes one of politics and business and not one of enjoyment or love for the game. There is also a direct effect on the fans and the economic prosperity of the game. On the flip side, there may be problems with how the player may be treated if he wins the salary arbitration. The owners may feel that the player's salary is too high for his ability. They may choose to reduce his playing time or change where he bats in the line-up, thus affecting his offensive output. In the case of a pitcher, the team may choose to put him in a more mediocre role. This may affect the player's ability to negotiate for a higher salary in the future during free
agency. The integrity of the game is affected by the ongoing relationship between the player and the team after arbitration. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
Explain the negative effects of salary arbitration in baseball from the point of view of the players. Why might a baseball player be opposed to salary arbitration?
<TEXT>
IV. PROBLEMS WITH SALARY ARBITRATION
The requirements laid out by the collective bargaining agreement still leave much room for problems between the players and the teams. The first problem stems from the final offer or high/low format of arbitration procedure. Requiring the arbitrator to choose one amount or the other makes the final offer format unique. The arbitrator cannot reach a compromise between the two parties' offers. Since the arbitrator can only choose one side, many owners feel that this may be the root cause of the increasing salaries in baseball. The owners feel that abolition of salary arbitration is proper because it becomes a "win-win" situation for the players. After salary arbitration, "the players will always come out better than they were before." [EN 24] The issue for the owners is that if they present an amount that is significantly low, the arbitrator will tend to favor the player and choose the higher amount. [EN 25] In order to prevent this from happening, many teams tend to keep their amount submitted higher than they would like to prevent the arbitrator from choosing the higher amount given by players.
However, the counter argument is that the final offer format forces both sides to give a reasonable offer. During the arbitration process, the parties will be more concerned with how much the other side will offer. The parties will also concentrate on making their own offer fairer, so that the arbitrator will select it.
The second issue with salary arbitration is whether the evidence introduced between the two sides can affect the ongoing relationship between the team and the player after the arbitration hearings. According to the CBA criteria for salary arbitration, a team can essentially introduce evidence that may degrade a player and his accomplishments in the arbitration hearings. However, since the player will likely be returning to the same team the following year, the team may tend to hold back sensitive information which may offend the player. An arbitrator from a prominent New York law firm that handles some of the arbitration proceedings for the New York Yankees stated in a phone conversation on March 4, 2002, that most teams tend to hold back degrading and malicious information about some of their players because they are afraid of the repercussions in the following year. For example, many teams will not disclose information in an arbitration hearing about how the team manager, teammates, or members of the organization feel about a certain player. If this information is negative, it will not be a comfortable situation for that player if he remains with the team during the following season.
Some teams are afraid that introducing degrading and detrimental evidence about a player and his conduct will offend him and lead him to take those feelings of betrayal with him
to the field the following season. The arbitrator gave an example of a player being affected by an arbitration hearing in the National Hockey League ("NHL"). The case involved the owner of the
New York Islanders, who went into a salary arbitration hearing with the team's then-goalie. The owner introduced humiliating evidence about that goalie into the arbitration hearing. The goalie felt so
betrayed by his team and the whole process that he refused to return to the Islanders the following season. Thus, the goalie was traded because of his refusal to play, a refusal that stemmed directly from the arbitration
hearings. To avoid an outcome such as this, most professional teams avoid introducing humiliating and degrading evidence about players who are in salary arbitration in order to keep a
positive ongoing relationship the following season.
The other major problem of salary arbitration in baseball is what happens when either party wins. If the owner wins, the player may feel betrayed. A player may feel that he has played well enough over the past few seasons to deserve a higher salary. By losing the arbitration hearing, the player may not play up to his full potential in the following season due to resentment towards the team. There is also the possibility that the player may play even better the following
season with the intention of not returning to his present team. A player may play beyond his potential to impress other teams and will not even consider re-signing with his present team as a free agent. A negative ongoing relationship is severely detrimental to baseball. The game becomes one of politics and business rather than one of enjoyment or love for the game. There is also a direct effect on the fans and the economic prosperity of the game. On the flip side, there may be problems with how the player is treated if he wins the salary arbitration. The owners may feel that the player's salary is too high for his ability. They may choose to reduce his playing time or change where he bats in the line-up, thus affecting his offensive output. In the case of a pitcher, the team may choose to relegate him to a lesser role. This may affect the player's ability to negotiate for a higher salary in the future during free
agency. The integrity of the game is affected by the ongoing relationship between the player and the team after arbitration.
https://via.library.depaul.edu/cgi/viewcontent.cgi?article=1094&context=jslcp&httpsredir=1&referer= |
Only use the information made available in the prompt to formulate an answer. Do not use any outside sources or prior knowledge. | Summarize the consequences of the mergers described in the text. | On August 28, 2017, Amazon acquired Whole Foods Market, a grocery retailer, for
approximately $13.2 billion.47 After reviewing the proposed acquisition, the FTC determined no
further action was needed at the time.48 Prior to the acquisition, Amazon offered the online
grocery delivery service Amazon Fresh, which launched in 2007,49 and Prime Pantry, which
launched in 2014 and ended in January 2021.50 By acquiring Whole Foods Market, Amazon
obtained brick-and-mortar grocery store locations that it was able to integrate with its online
services.51 For example, shoppers with an Amazon Prime membership52 are eligible for discounts
and free pickup or delivery of Whole Foods Market groceries in selected zip codes,53 and Amazon
Hub Lockers—where consumers can pick up products purchased on Amazon’s website—are
often located in Whole Foods Markets.54
Amazon’s acquisition of Whole Foods Market may have increased competition in the grocery
retail market. Prior to the acquisition, Walmart was the largest grocery retailer, followed by
Kroger.55 Progressive Grocer, a research group, estimates that in 2020, Walmart had the highest
U.S. retail sales of grocery items, followed by Amazon.56 However, Duff & Phelps, a consulting
firm, indicates that Amazon comprises only a small portion of the grocery retail market and that it
serves as “more of a symbolic threat.”57 Nevertheless, other grocery retailers have responded by
implementing changes in response to competitive pressure from Amazon.58
Competitive pressure from Amazon may have incentivized other grocery retailers to start offering
online delivery services. In 2017, the year Amazon acquired Whole Foods, Walmart launched an
online delivery service in selected cities;59 Kroger launched an online delivery service in selected
cities in 2018.60 In 2020, Walmart launched Walmart+,61 a membership delivery service that does
not have a minimum order requirement,62 similar to an Amazon Prime membership. Consumers
may have benefited from food retailers offering their own online delivery services, particularly as
many of these stores offer free delivery on orders over $35. These changes may have also
increased pressure on other online grocery delivery services, such as Instacart, a third-party
service that delivers online groceries from selected stores in selected cities; the service launched
in 2012 and stopped delivering groceries from Whole Foods in 2019.63
Amazon’s acquisition of Whole Foods Market raised concern about its growing dominance in the
retail industry, particularly in e-commerce. According to eMarketer, a market research company,
Amazon had the greatest share of e-commerce sales at 38.7% in 2020; Walmart had the second-
greatest share at 5.3% (Figure 1). The estimate from eMarketer includes all online sales,
including products that Amazon does not offer. The House Subcommittee on Antitrust staff report
finds that by restricting products to those sold on Amazon, a market share of 50% or higher may
be a more credible estimate of Amazon’s share of online sales, and that over 60% of all U.S.
online product searches begin on Amazon.64 Through its acquisition of Whole Foods, Amazon
gained access to additional consumer data, strengthening its bargaining power with suppliers.65 In
addition, Amazon has integrated vertically, such as by offering products under its private label
AmazonBasics and by creating its own delivery system. Amazon has reportedly invested $60
billion since 2014 in its delivery network, including capital leases for warehouses and aircraft; in
2019, it had the fourth-largest share of U.S. package deliveries, behind FedEx, United Parcel
Service, and the U.S. Postal Service.66 By integrating vertically, Amazon may be able to further
strengthen its position in e-commerce; if, for example, it is able to provide faster delivery,67
consumers could benefit even if it becomes more difficult for other companies to compete.
Facebook’s Acquisition of Instagram
Facebook announced that it had reached an agreement to acquire Instagram, a social networking
service (i.e., social media platform), for $1 billion on April 9, 2012.68 The FTC reviewed the
acquisition, and on August 22, 2012, it closed the investigation without taking action.69 On
December 9, 2020, the FTC filed a lawsuit against Facebook, alleging that “Facebook has
maintained its monopoly position by buying up companies that present competitive threats,” in
addition to imposing restrictive policies against companies it does not acquire.70 A coalition of 46
state attorneys general, led by New York Attorney General Letitia James, filed a parallel lawsuit
against Facebook, also alleging that Facebook acquired companies to eliminate competitive
threats.71 Both lawsuits72 specifically mention Facebook’s acquisitions of Instagram and
WhatsApp, a messaging app for mobile devices.73
Prior to the acquisition, Facebook CEO Mark Zuckerberg stated in an internal email that
“Instagram has become a large and viable competitor to us on mobile photos, which will
increasingly be the future of photos.”74 This statement has been used to support the claim that
Facebook acquired Instagram with the intention of eliminating a potential competitor.
It is unclear how successful Instagram would have been had it not been acquired by Facebook,
illustrating the difficulty of predicting whether a nascent firm could become a viable competitor.
Instagram was a relatively new company when it was acquired,75 and grew rapidly thereafter,
from about 100 million monthly active users (MAUs) in February 2013 to 500 million MAUs in June 2016 and 1 billion MAUs in June 2018.76 As it grew in popularity, Instagram was able to use
Facebook’s resources, such as its advertising services and its infrastructure, which hosts and
processes large amounts of consumer data. These have been key to the profitability of Instagram,
which hosts a wide range of users, including “influencers”—that is, users with a large number of
followers who are paid by sponsors to market certain products.77 It is possible that without the
merger, Instagram would have been among the platforms that have struggled to compete in digital
markets because of resource constraints. This occurred with the social networking service
Friendster, which turned down a $30 million buyout offer from Google in 2003 but then struggled
with technical difficulties as its user base grew; users left the platform for other social media
sites, and Friendster eventually closed down.78
Another complication in evaluating the effect of Facebook’s acquisition of Instagram is
determining how the market should be defined, particularly in digital markets that can quickly
evolve. Social networking services can include a wide range of platforms. When Facebook
acquired Instagram in 2012, one of the defining features of social networking services—a
category that than included Friendster and Myspace, among others—was the networks users
could create. Users could clearly indicate the users in their respective network(s) on the social
networking service,79 although some may have chosen to keep their network(s) private. At that
time, Instagram was described as a photo-sharing app, arguably competing with apps like
Photobucket and Flickr, rather than with Facebook.
Additional types of platforms can be considered social networking services: Reddit allows users
to create communities based on their interests; LinkedIn allows users to create connections for
business and employment opportunities; and TikTok allows users to share short-form videos.80
Some of these platforms allow users to connect with any other user on the platform rather than
only with users in their personal network, focusing on the content rather than the user. These
changes suggest that a user’s ability to create social networks may no longer be the defining
feature of social networking services.
In addition, social networking services are not necessarily substitutes for one another. For
example, although Instagram and Microsoft’s LinkedIn are both typically viewed as social
networking services, it is unlikely that users would substitute one platform for the other. One
report estimates that internet users had an average of about seven social media accounts,
suggesting that some users rely on different social media platforms for different purposes.81 | On August 28, 2017, Amazon acquired Whole Foods Market, a grocery retailer, for
approximately $13.2 billion.47 After reviewing the proposed acquisition, the FTC determined no
further action was needed at the time.48 Prior to the acquisition, Amazon offered the online
grocery delivery service Amazon Fresh, which launched in 2007,49 and Prime Pantry, which
launched in 2014 and ended in January 2021.50 By acquiring Whole Foods Market, Amazon
obtained brick-and-mortar grocery store locations that it was able to integrate with its online
services.51 For example, shoppers with an Amazon Prime membership52 are eligible for discounts
and free pickup or delivery of Whole Foods Market groceries in selected zip codes,53 and Amazon
Hub Lockers—where consumers can pick up products purchased on Amazon’s website—are
often located in Whole Foods Markets.54
Amazon’s acquisition of Whole Foods Market may have increased competition in the grocery
retail market. Prior to the acquisition, Walmart was the largest grocery retailer, followed by
Kroger.55 Progressive Grocer, a research group, estimates that in 2020, Walmart had the highest
U.S. retail sales of grocery items, followed by Amazon.56 However, Duff & Phelps, a consulting
firm, indicates that Amazon comprises only a small portion of the grocery retail market and that it
serves as “more of a symbolic threat.”57 Nevertheless, other grocery retailers have responded by
implementing changes in response to competitive pressure from Amazon.58
Competitive pressure from Amazon may have incentivized other grocery retailers to start offering
online delivery services. In 2017, the year Amazon acquired Whole Foods, Walmart launched an
online delivery service in selected cities;59 Kroger launched an online delivery service in selected
cities in 2018.60 In 2020, Walmart launched Walmart+,61 a membership delivery service that does
not have a minimum order requirement,62 similar to an Amazon Prime membership. Consumers
may have benefited from food retailers offering their own online delivery services, particularly as
many of these stores offer free delivery on orders over $35. These changes may have also
increased pressure on other online grocery delivery services, such as Instacart, a third-party
service that delivers online groceries from selected stores in selected cities; the service launched
in 2012 and stopped delivering groceries from Whole Foods in 2019.63
Amazon’s acquisition of Whole Foods Market raised concern about its growing dominance in the
retail industry, particularly in e-commerce. According to eMarketer, a market research company,
Amazon had the greatest share of e-commerce sales at 38.7% in 2020; Walmart had the second-
greatest share at 5.3% (Figure 1). The estimate from eMarketer includes all online sales,
including products that Amazon does not offer. The House Subcommittee on Antitrust staff report
finds that by restricting products to those sold on Amazon, a market share of 50% or higher may
be a more credible estimate of Amazon’s share of online sales, and that over 60% of all U.S.
online product searches begin on Amazon.64 Through its acquisition of Whole Foods, Amazon
gained access to additional consumer data, strengthening its bargaining power with suppliers.65 In
addition, Amazon has integrated vertically, such as by offering products under its private label
AmazonBasics and by creating its own delivery system. Amazon has reportedly invested $60
billion since 2014 in its delivery network, including capital leases for warehouses and aircraft; in
2019, it had the fourth-largest share of U.S. package deliveries, behind FedEx, United Parcel
Service, and the U.S. Postal Service.66 By integrating vertically, Amazon may be able to further
strengthen its position in e-commerce; if, for example, it is able to provide faster delivery,67
consumers could benefit even if it becomes more difficult for other companies to compete.
Facebook’s Acquisition of Instagram
Facebook announced that it had reached an agreement to acquire Instagram, a social networking
service (i.e., social media platform), for $1 billion on April 9, 2012.68 The FTC reviewed the
acquisition, and on August 22, 2012, it closed the investigation without taking action.69 On
December 9, 2020, the FTC filed a lawsuit against Facebook, alleging that “Facebook has
maintained its monopoly position by buying up companies that present competitive threats,” in
addition to imposing restrictive policies against companies it does not acquire.70 A coalition of 46
state attorneys general, led by New York Attorney General Letitia James, filed a parallel lawsuit
against Facebook, also alleging that Facebook acquired companies to eliminate competitive
threats.71 Both lawsuits72 specifically mention Facebook’s acquisitions of Instagram and
WhatsApp, a messaging app for mobile devices.73
Prior to the acquisition, Facebook CEO Mark Zuckerberg stated in an internal email that
“Instagram has become a large and viable competitor to us on mobile photos, which will
increasingly be the future of photos.”74 This statement has been used to support the claim that
Facebook acquired Instagram with the intention of eliminating a potential competitor.
It is unclear how successful Instagram would have been had it not been acquired by Facebook,
illustrating the difficulty of predicting whether a nascent firm could become a viable competitor.
Instagram was a relatively new company when it was acquired,75 and grew rapidly thereafter,
from about 100 million monthly active users (MAUs) in February 2013 to 500 million MAUs in June 2016 and 1 billion MAUs in June 2018.76 As it grew in popularity, Instagram was able to use
Facebook’s resources, such as its advertising services and its infrastructure, which hosts and
processes large amounts of consumer data. These have been key to the profitability of Instagram,
which hosts a wide range of users, including “influencers”—that is, users with a large number of
followers who are paid by sponsors to market certain products.77 It is possible that without the
merger, Instagram would have been among the platforms that have struggled to compete in digital
markets because of resource constraints. This occurred with the social networking service
Friendster, which turned down a $30 million buyout offer from Google in 2003 but then struggled
with technical difficulties as its user base grew; users left the platform for other social media
sites, and Friendster eventually closed down.78
Another complication in evaluating the effect of Facebook’s acquisition of Instagram is
determining how the market should be defined, particularly in digital markets that can quickly
evolve. Social networking services can include a wide range of platforms. When Facebook
acquired Instagram in 2012, one of the defining features of social networking services—a
category that then included Friendster and Myspace, among others—was the networks users
could create. Users could clearly indicate the users in their respective network(s) on the social
networking service,79 although some may have chosen to keep their network(s) private. At that
time, Instagram was described as a photo-sharing app, arguably competing with apps like
Photobucket and Flickr, rather than with Facebook.
Additional types of platforms can be considered social networking services: Reddit allows users
to create communities based on their interests; LinkedIn allows users to create connections for
business and employment opportunities; and TikTok allows users to share short-form videos.80
Some of these platforms allow users to connect with any other user on the platform rather than
only with users in their personal network, focusing on the content rather than the user. These
changes suggest that a user’s ability to create social networks may no longer be the defining
feature of social networking services.
In addition, social networking services are not necessarily substitutes for one another. For
example, although Instagram and Microsoft’s LinkedIn are both typically viewed as social
networking services, it is unlikely that users would substitute one platform for the other. One
report estimates that internet users had an average of about seven social media accounts,
suggesting that some users rely on different social media platforms for different purposes.81
Summarize the consequences of the mergers described in the text. Only use the information made available in the prompt to formulate an answer. Do not use any outside sources or prior knowledge. |
Respond only with information drawn from the text. Use bullet points to format your response. | According to the context, what's the difference between a migraine and a cluster headache? | PART 1. THE PRIMARY HEADACHES
1. Migraine
1.1 Migraine without aura
A. At least five attacks fulfilling criteria B-D
B. Headache attacks lasting 4-72 hours (when untreated or unsuccessfully treated)
C. Headache has at least two of the following four characteristics: 1. unilateral location 2. pulsating quality 3. moderate or severe pain intensity 4. aggravation by or causing avoidance of routine physical activity (eg, walking or climbing stairs)
D. During headache at least one of the following: 1. nausea and/or vomiting 2. photophobia and phonophobia
E. Not better accounted for by another ICHD-3 diagnosis.
1.2 Migraine with aura
A. At least two attacks fulfilling criteria B and C
B. One or more of the following fully reversible aura symptoms: 1. visual 2. sensory 3. speech and/or language 4. motor 5. brainstem 6. retinal
C. At least three of the following six characteristics: 1. at least one aura symptom spreads gradually over ≥5 minutes 2. two or more aura symptoms occur in succession 3. each individual aura symptom lasts 5-60 minutes 4. at least one aura symptom is unilateral 5. at least one aura symptom is positive 6. the aura is accompanied, or followed within 60 minutes, by headache
D. Not better accounted for by another ICHD-3 diagnosis.
1.2.1 Migraine with typical aura
A. Attacks fulfilling criteria for 1.2 Migraine with aura and criterion B below
B. Aura with both of the following: 1. fully reversible visual, sensory and/or speech/language symptoms 2. no motor, brainstem or retinal symptoms.
1.2.1.1 Typical aura with headache
A. Attacks fulfilling criteria for 1.2.1 Migraine with typical aura and criterion B below
B. Headache, with or without migraine characteristics, accompanies or follows the aura within 60 minutes.
1.2.1.2 Typical aura without headache
A. Attacks fulfilling criteria for 1.2.1 Migraine with typical aura and criterion B below
B. No headache accompanies or follows the aura within 60 minutes.
1.2.2 Migraine with brainstem aura
A. Attacks fulfilling criteria for 1.2 Migraine with aura and criterion B below
B. Aura with both of the following: 1. at least two of the following fully reversible brainstem symptoms: a) dysarthria b) vertigo c) tinnitus d) hypacusis e) diplopia f) ataxia not attributable to sensory deficit g) decreased level of consciousness (GCS ≤13) 2. no motor or retinal symptoms.
1.2.3 Hemiplegic migraine
A. Attacks fulfilling criteria for 1.2 Migraine with aura and criterion B below
B. Aura consisting of both of the following: 1. fully reversible motor weakness 2. fully reversible visual, sensory and/or speech/language symptoms.
1.2.3.1 Familial hemiplegic migraine
A. Attacks fulfilling criteria for 1.2.3 Hemiplegic migraine
B. At least one first- or second-degree relative has had attacks fulfilling criteria for 1.2.3 Hemiplegic migraine.
1.3 Chronic migraine
A. Headache (migraine-like or tension-type-like) on ≥15 days/month for >3 months, and fulfilling criteria B and C
B. Occurring in a patient who has had at least five attacks fulfilling criteria B-D for 1.1 Migraine without aura and/or criteria B and C for 1.2 Migraine with aura
C. On ≥8 days/month for >3 months, fulfilling any of the following: 1. criteria C and D for 1.1 Migraine without aura 2. criteria B and C for 1.2 Migraine with aura 3. believed by the patient to be migraine at onset and relieved by a triptan or ergot derivative
D. Not better accounted for by another ICHD-3 diagnosis.
2. Tension-type headache (TTH)
2.1 Infrequent episodic TTH
A. At least 10 episodes of headache occurring on <1 day/month on average (<12 days/year) and fulfilling criteria B-D
B. Lasting from 30 minutes to 7 days
C. At least two of the following four characteristics: 1. bilateral location 2. pressing or tightening (non-pulsating) quality 3. mild or moderate intensity 4. not aggravated by routine physical activity such as walking or climbing stairs
D. Both of the following: 1. no nausea or vomiting 2. no more than one of photophobia or phonophobia
E. Not better accounted for by another ICHD-3 diagnosis.
2.2 Frequent episodic TTH
As 2.1 except:
A. At least 10 episodes of headache occurring on 1-14 days/month on average for >3 months (≥12 and <180 days/year) and fulfilling criteria B-D.
2.3 Chronic TTH
As 2.1 except:
A. Headache occurring on ≥15 days/month on average for >3 months (≥180 days/year), fulfilling criteria B-D
B. Lasting hours to days, or unremitting
D. Both of the following: 1. no more than one of photophobia, phonophobia or mild nausea 2. neither moderate or severe nausea nor vomiting
3. Trigeminal autonomic cephalalgias
3.1 Cluster headache
A. At least five attacks fulfilling criteria B-D
B. Severe or very severe unilateral orbital, supraorbital and/or temporal pain lasting 15-180 minutes (when untreated)
C. Either or both of the following: 1. at least one of the following symptoms or signs, ipsilateral to the headache: a) conjunctival injection and/or lacrimation b) nasal congestion and/or rhinorrhoea c) eyelid oedema d) forehead and facial sweating e) miosis and/or ptosis 2. a sense of restlessness or agitation
D. Occurring with a frequency between one every other day and 8 per day
E. Not better accounted for by another ICHD-3 diagnosis.
3.1.1 Episodic cluster headache
A. Attacks fulfilling criteria for 3.1 Cluster headache and occurring in bouts (cluster periods)
B. At least two cluster periods lasting from 7 days to 1 year (when untreated) and separated by pain-free remission periods of ≥3 months.
3.1.2 Chronic cluster headache
A. Attacks fulfilling criteria for 3.1 Cluster headache, and criterion B below
B. Occurring without a remission period, or with remissions lasting <3 months, for at least 1 year.
3.4 Hemicrania continua
A. Unilateral headache fulfilling criteria B-D
B. Present for >3 months, with exacerbations of moderate or greater intensity
C. Either or both of the following: 1. at least one of the following symptoms or signs, ipsilateral to the headache: a) conjunctival injection and/or lacrimation b) nasal congestion and/or rhinorrhoea c) eyelid oedema d) forehead and facial sweating e) miosis and/or ptosis 2. a sense of restlessness or agitation, or aggravation of the pain by movement
D. Responds absolutely to therapeutic doses of indomethacin
E. Not better accounted for by another ICHD-3 diagnosis.
4. Other primary headache disorders
4.3 Primary headache associated with sexual activity
A. At least two episodes of pain in the head and/or neck fulfilling criteria B-D
B. Brought on by and occurring only during sexual activity
C. Either or both of the following: 1. increasing in intensity with increasing sexual excitement 2. abrupt explosive intensity just before or with orgasm
D. Lasting from 1 minute to 24 hours with severe intensity and/or up to 72 hours with mild intensity
E. Not better accounted for by another ICHD-3 diagnosis.
4.5 Cold-stimulus headache
4.5.1 Headache attributed to ingestion or inhalation of a cold stimulus
A. At least two episodes of acute frontal or temporal headache fulfilling criteria B and C
B. Brought on by and occurring immediately after a cold stimulus to the palate and/or posterior pharyngeal wall from ingestion of cold food or drink or inhalation of cold air
C. Resolving within 10 minutes after removal of the cold stimulus
D. Not better accounted for by another ICHD-3 diagnosis.
4.7 Primary stabbing headache
A. Head pain occurring spontaneously as a single stab or series of stabs and fulfilling criteria B and C
B. Each stab lasts for up to a few seconds
C. Stabs recur with irregular frequency, from one to many per day
D. No cranial autonomic symptoms
E. Not better accounted for by another ICHD-3 diagnosis.
4.8 Nummular headache
A. Continuous or intermittent head pain fulfilling criterion B
B. Felt exclusively in an area of the scalp, with all of the following four characteristics: 1. sharply-contoured 2. fixed in size and shape 3. round or elliptical 4. 1-6 cm in diameter
C. Not better accounted for by another ICHD-3 diagnosis.
4.9 Hypnic headache
A. Recurrent headache attacks fulfilling criteria B-D
B. Developing only during sleep, and causing wakening
C. Occurring on ≥10 days/month for >3 months
D. Lasting from 15 minutes up to 4 hours after waking
E. No cranial autonomic symptoms or restlessness
F. Not better accounted for by another ICHD-3 diagnosis.
4.10 New daily persistent headache (NDPH)
A. Persistent headache fulfilling criteria B and C
B. Distinct and clearly-remembered onset, with pain becoming continuous and unremitting within 24 hours
C. Present for >3 months
D. Not better accounted for by another ICHD-3 diagnosis. | Question: According to the context, what's the difference between a migraine and a cluster headache?
System Instructions: Respond only with information drawn from the text. Use bullet points to format your response.
Context:
The International Classification of Headache Disorders 3rd Edition (ICHD-3)
PART 1. THE PRIMARY HEADACHES
1. Migraine
1.1 Migraine without aura
A. At least five attacks fulfilling criteria B-D
B. Headache attacks lasting 4-72 hours (when untreated or unsuccessfully treated)
C. Headache has at least two of the following four characteristics: 1. unilateral location 2. pulsating quality 3. moderate or severe pain intensity 4. aggravation by or causing avoidance of routine physical activity (eg, walking or climbing stairs)
D. During headache at least one of the following: 1. nausea and/or vomiting 2. photophobia and phonophobia
E. Not better accounted for by another ICHD-3 diagnosis.
1.2 Migraine with aura
A. At least two attacks fulfilling criteria B and C
B. One or more of the following fully reversible aura symptoms: 1. visual 2. sensory 3. speech and/or language 4. motor 5. brainstem 6. retinal
C. At least three of the following six characteristics: 1. at least one aura symptom spreads gradually over ≥5 minutes 2. two or more aura symptoms occur in succession 3. each individual aura symptom lasts 5-60 minutes 4. at least one aura symptom is unilateral 5. at least one aura symptom is positive 6. the aura is accompanied, or followed within 60 minutes, by headache
D. Not better accounted for by another ICHD-3 diagnosis.
1.2.1 Migraine with typical aura
A. Attacks fulfilling criteria for 1.2 Migraine with aura and criterion B below
B. Aura with both of the following: 1. fully reversible visual, sensory and/or speech/language symptoms 2. no motor, brainstem or retinal symptoms.
1.2.1.1 Typical aura with headache
A. Attacks fulfilling criteria for 1.2.1 Migraine with typical aura and criterion B below
B. Headache, with or without migraine characteristics, accompanies or follows the aura within 60 minutes.
1.2.1.2 Typical aura without headache
A. Attacks fulfilling criteria for 1.2.1 Migraine with typical aura and criterion B below
B. No headache accompanies or follows the aura within 60 minutes.
1.2.2 Migraine with brainstem aura
A. Attacks fulfilling criteria for 1.2 Migraine with aura and criterion B below
B. Aura with both of the following: 1. at least two of the following fully reversible brainstem symptoms: a) dysarthria b) vertigo c) tinnitus d) hypacusis e) diplopia f) ataxia not attributable to sensory deficit g) decreased level of consciousness (GCS ≤13) 2. no motor or retinal symptoms.
1.2.3 Hemiplegic migraine
A. Attacks fulfilling criteria for 1.2 Migraine with aura and criterion B below
B. Aura consisting of both of the following: 1. fully reversible motor weakness 2. fully reversible visual, sensory and/or speech/language symptoms.
1.2.3.1 Familial hemiplegic migraine
A. Attacks fulfilling criteria for 1.2.3 Hemiplegic migraine
B. At least one first- or second-degree relative has had attacks fulfilling criteria for 1.2.3 Hemiplegic migraine.
1.3 Chronic migraine
A. Headache (migraine-like or tension-type-like) on ≥15 days/month for >3 months, and fulfilling criteria B and C
B. Occurring in a patient who has had at least five attacks fulfilling criteria B-D for 1.1 Migraine without aura and/or criteria B and C for 1.2 Migraine with aura
C. On ≥8 days/month for >3 months, fulfilling any of the following: 1. criteria C and D for 1.1 Migraine without aura 2. criteria B and C for 1.2 Migraine with aura 3. believed by the patient to be migraine at onset and relieved by a triptan or ergot derivative
D. Not better accounted for by another ICHD-3 diagnosis.
2. Tension-type headache (TTH)
2.1 Infrequent episodic TTH
A. At least 10 episodes of headache occurring on <1 day/month on average (<12 days/year) and fulfilling criteria B-D
B. Lasting from 30 minutes to 7 days
C. At least two of the following four characteristics: 1. bilateral location 2. pressing or tightening (non-pulsating) quality 3. mild or moderate intensity 4. not aggravated by routine physical activity such as walking or climbing stairs
D. Both of the following: 1. no nausea or vomiting 2. no more than one of photophobia or phonophobia
E. Not better accounted for by another ICHD-3 diagnosis.
2.2 Frequent episodic TTH
As 2.1 except:
A. At least 10 episodes of headache occurring on 1-14 days/month on average for >3 months (≥12 and <180 days/year) and fulfilling criteria B-D.
2.3 Chronic TTH
As 2.1 except:
A. Headache occurring on ≥15 days/month on average for >3 months (≥180 days/year), fulfilling criteria B-D
B. Lasting hours to days, or unremitting
D. Both of the following: 1. no more than one of photophobia, phonophobia or mild nausea 2. neither moderate or severe nausea nor vomiting
3. Trigeminal autonomic cephalalgias
3.1 Cluster headache
A. At least five attacks fulfilling criteria B-D
B. Severe or very severe unilateral orbital, supraorbital and/or temporal pain lasting 15-180 minutes (when untreated)
C. Either or both of the following: 1. at least one of the following symptoms or signs, ipsilateral to the headache: a) conjunctival injection and/or lacrimation b) nasal congestion and/or rhinorrhoea c) eyelid oedema d) forehead and facial sweating e) miosis and/or ptosis 2. a sense of restlessness or agitation
D. Occurring with a frequency between one every other day and 8 per day
E. Not better accounted for by another ICHD-3 diagnosis.
3.1.1 Episodic cluster headache
A. Attacks fulfilling criteria for 3.1 Cluster headache and occurring in bouts (cluster periods)
B. At least two cluster periods lasting from 7 days to 1 year (when untreated) and separated by pain-free remission periods of ≥3 months.
3.1.2 Chronic cluster headache
A. Attacks fulfilling criteria for 3.1 Cluster headache, and criterion B below
B. Occurring without a remission period, or with remissions lasting <3 months, for at least 1 year.
3.4 Hemicrania continua
A. Unilateral headache fulfilling criteria B-D
B. Present for >3 months, with exacerbations of moderate or greater intensity
C. Either or both of the following: 1. at least one of the following symptoms or signs, ipsilateral to the headache: a) conjunctival injection and/or lacrimation b) nasal congestion and/or rhinorrhoea c) eyelid oedema d) forehead and facial sweating e) miosis and/or ptosis 2. a sense of restlessness or agitation, or aggravation of the pain by movement
D. Responds absolutely to therapeutic doses of indomethacin
E. Not better accounted for by another ICHD-3 diagnosis.
4. Other primary headache disorders
4.3 Primary headache associated with sexual activity
A. At least two episodes of pain in the head and/or neck fulfilling criteria B-D
B. Brought on by and occurring only during sexual activity
C. Either or both of the following: 1. increasing in intensity with increasing sexual excitement 2. abrupt explosive intensity just before or with orgasm
D. Lasting from 1 minute to 24 hours with severe intensity and/or up to 72 hours with mild intensity
E. Not better accounted for by another ICHD-3 diagnosis.
4.5 Cold-stimulus headache
4.5.1 Headache attributed to ingestion or inhalation of a cold stimulus
A. At least two episodes of acute frontal or temporal headache fulfilling criteria B and C
B. Brought on by and occurring immediately after a cold stimulus to the palate and/or posterior pharyngeal wall from ingestion of cold food or drink or inhalation of cold air
C. Resolving within 10 minutes after removal of the cold stimulus
D. Not better accounted for by another ICHD-3 diagnosis.
4.7 Primary stabbing headache
A. Head pain occurring spontaneously as a single stab or series of stabs and fulfilling criteria B and C
B. Each stab lasts for up to a few seconds
C. Stabs recur with irregular frequency, from one to many per day
D. No cranial autonomic symptoms
E. Not better accounted for by another ICHD-3 diagnosis.
4.8 Nummular headache
A. Continuous or intermittent head pain fulfilling criterion B
B. Felt exclusively in an area of the scalp, with all of the following four characteristics: 1. sharply-contoured 2. fixed in size and shape 3. round or elliptical 4. 1-6 cm in diameter
C. Not better accounted for by another ICHD-3 diagnosis.
4.9 Hypnic headache
A. Recurrent headache attacks fulfilling criteria B-D
B. Developing only during sleep, and causing wakening
C. Occurring on ≥10 days/month for >3 months
D. Lasting from 15 minutes up to 4 hours after waking
E. No cranial autonomic symptoms or restlessness
F. Not better accounted for by another ICHD-3 diagnosis.
4.10 New daily persistent headache (NDPH)
A. Persistent headache fulfilling criteria B and C
B. Distinct and clearly-remembered onset, with pain becoming continuous and unremitting within 24 hours
C. Present for >3 months
D. Not better accounted for by another ICHD-3 diagnosis. |
Use only the information provided above to answer the question. Answer in paragraph form and keep your answer to under 150 words. | What happens if a home title is listed as Joint Tenants with Rights of Survivorship when one of the owners sells their share of the property to someone else? | To create a joint tenancy, be sure to get the right legal words on the deed or title document.
Joint tenancy with the right of survivorship is a popular way to avoid probate. It certainly has the virtue of simplicity. To create a joint tenancy with the right of survivorship, all you need to do is put the right words on the title document, such as a deed to real estate, a car's title slip, or the signature card establishing a bank account.
What exactly is a joint tenancy with right of survivorship (often shortened simply to "joint tenancy")? It's a co-ownership method that comes with the right to take a deceased co-owner's share of the property. If you co-own a piece of property with someone as joint tenants with the right of survivorship, when your co-owner dies, you automatically own their half of the property, and vice versa. (Contrast joint tenancy with a tenancy in common.)
While many use "joint tenancy" interchangeably with "joint tenancy with right of survivorship," and we do so as well in this article, be aware that a few states (such as Texas) have different norms. In situations where you want to be absolutely clear, be sure to include "with right of survivorship."
In the great majority of states, if you and your co-owners own property as "joint tenants with the right of survivorship" or put the abbreviation "JT WROS" after your names on the title document, you not only co-own the property, but you own it in a way that automatically determines who will own it when one of you dies.
A car salesman or bank staffer may assure you that other words are enough. For example, connecting the names of the owners with the word "or," not "and," does create a joint tenancy, in some circumstances, in some states. But it's always better to unambiguously spell out what you want: joint tenancy with right of survivorship.
When Ken and his wife, Janelle, buy a house, they want to take title in joint tenancy. When the deed that transfers the house to them is prepared, all they need to do is tell the title company to identify them on the deed in this way:
Kenneth J. Hartman and Janelle M. Grubcek, as joint tenants with right of survivorship.
There should be no extra cost or paperwork.
Joint tenancy—or a form of ownership that achieves the same probate-avoiding result—is available in all states, although a few impose restrictions, such as the ones summarized below. In addition, one rule applies in every state except Colorado, Connecticut, North Carolina, Ohio, and Vermont: All joint tenants must own equal shares of the property. If you want a different arrangement, such as 60%-40% ownership, joint tenancy is not for you.
Alaska: Joint tenancy is not allowed for real estate, but married spouses may own as tenants by the entirety.
Oregon: A transfer to married spouses creates tenancy by the entirety unless the document clearly states otherwise.
Tennessee: A transfer to husband and wife creates tenancy by the entirety, not joint tenancy.
Wisconsin: Joint tenancy is not available between spouses, but survivorship marital property is.
Learn more about tenancy by the entirety, which has many similarities to joint tenancy, but is available only to married couples.
Especially when it comes to real estate, all law is local, so be sure you know your state's rules on what language is required to create a joint tenancy with the right of survivorship. While "as joint tenants with right of survivorship" works in many situations, the specific laws of your state might vary slightly. Joint tenancy deeds can look a little different, depending on your state. If you're not sure, talk to a local real estate lawyer. Here are just a few special state rules.
Michigan: Michigan has two forms of joint tenancy. A traditional joint tenancy is formed when property is transferred to two or more persons using the language "as joint tenants and not as tenants in common." Any owner may terminate the joint tenancy unilaterally (without the consent of the other owner).
If, however, property is transferred to the new owners using the language "as joint tenants with right of survivorship" or to the new owners "and the survivor of them," the result is different. No owner can destroy this joint tenancy unilaterally. Even if you transfer your interest to someone else, that person takes it subject to the rights of your original co-owner. So if you were to die before your original co-owner, that co-owner would automatically own the whole property.
EXAMPLE: Alice and Ben own land in Michigan as "joint tenants with full right of survivorship." Alice sells her interest to Catherine and dies a few years later, while Ben is still alive. Ben now owns the whole property; Catherine owns nothing.
Oregon: Oregon doesn't use the term "joint tenancy"; instead, you create a survivorship estate. The result is the same as with a joint tenancy: when one owner dies, the surviving owner owns the whole property. But technically, creating a survivorship estate creates what the lawyers call "a tenancy in common in the life estate with cross-contingent remainders in the fee simple." (That clears it up, doesn't it?)
South Carolina: To hold real estate in joint tenancy, the deed should use the words "as joint tenants with rights of survivorship, and not as tenants in common," just to make it crystal clear. (S.C. Code Ann. § 27-7-40.)
Texas: If you want to set up a joint tenancy in Texas, you and the other joint tenants might have to sign a written agreement. For example, if you want to create a joint tenancy bank account, so that the survivor will get all the funds, specifying your arrangement on the bank's signature card may not be enough. Fortunately, a bank or real estate office should be able to give you a fill-in-the-blanks form.
Take this requirement seriously. A dispute over such an account ended up in the Texas Supreme Court. Two sisters had set up an account together, using a signature card that allowed the survivor to withdraw the funds. But when one sister died, and the other withdrew the funds, the estate of the deceased sister sued—and won the funds—because the signature card's language didn't satisfy the requirements of the Texas statute. (Stauffer v. Henderson, 801 S.W.2d 858 (Tex. 1991).) More recently, the Texas Supreme Court ruled that a married couple who owned investment accounts labeled "JT TEN" did have survivorship rights, even though they hadn't signed anything stating whether or not the account had a survivorship feature. Holmes v. Beatty, 290 S.W.3d 852 (Tex. 2009). But it's still better to be explicit about your intentions.
Joint tenancy and the different ways of co-owning property can be complicated. If you're dealing with the co-owned property of a loved one who died, and you're not sure how they co-owned it or what the implications are, find a probate attorney to help. | Context Block: To create a joint tenancy, be sure to get the right legal words on the deed or title document.
Joint tenancy with the right of survivorship is a popular way to avoid probate. It certainly has the virtue of simplicity. To create a joint tenancy with the right of survivorship, all you need to do is put the right words on the title document, such as a deed to real estate, a car's title slip, or the signature card establishing a bank account.
What exactly is a joint tenancy with right of survivorship (often shortened simply to "joint tenancy")? It's a co-ownership method that comes with the right to take a deceased co-owner's share of the property. If you co-own a piece of property with someone as joint tenants with the right of survivorship, when your co-owner dies, you automatically own their half of the property, and vice versa. (Contrast joint tenancy with a tenancy in common.)
While many use "joint tenancy" interchangeably with "joint tenancy with right of survivorship," and we do so as well in this article, be aware that a few states (such as Texas) have different norms. In situations where you want to be absolutely clear, be sure to include "with right of survivorship."
In the great majority of states, if you and your co-owners own property as "joint tenants with the right of survivorship" or put the abbreviation "JT WROS" after your names on the title document, you not only co-own the property, but you own it in a way that automatically determines who will own it when one of you dies.
A car salesman or bank staffer may assure you that other words are enough. For example, connecting the names of the owners with the word "or," not "and," does create a joint tenancy, in some circumstances, in some states. But it's always better to unambiguously spell out what you want: joint tenancy with right of survivorship.
When Ken and his wife, Janelle, buy a house, they want to take title in joint tenancy. When the deed that transfers the house to them is prepared, all they need to do is tell the title company to identify them on the deed in this way:
Kenneth J. Hartman and Janelle M. Grubcek, as joint tenants with right of survivorship.
There should be no extra cost or paperwork.
Joint tenancy—or a form of ownership that achieves the same probate-avoiding result—is available in all states, although a few impose restrictions, such as the ones summarized below. In addition, one rule applies in every state except Colorado, Connecticut, North Carolina, Ohio, and Vermont: All joint tenants must own equal shares of the property. If you want a different arrangement, such as 60%-40% ownership, joint tenancy is not for you.
Alaska: Joint tenancy is not allowed for real estate, but married spouses may own as tenants by the entirety.
Oregon: A transfer to married spouses creates tenancy by the entirety unless the document clearly states otherwise.
Tennessee: A transfer to husband and wife creates tenancy by the entirety, not joint tenancy.
Wisconsin: Joint tenancy is not available between spouses, but survivorship marital property is.
Learn more about tenancy by the entirety, which has many similarities to joint tenancy, but is available only to married couples.
Especially when it comes to real estate, all law is local, so be sure you know your state's rules on what language is required to create a joint tenancy with the right of survivorship. While "as joint tenants with right of survivorship" works in many situations, the specific laws of your state might vary slightly. Joint tenancy deeds can look a little different, depending on your state. If you're not sure, talk to a local real estate lawyer. Here are just a few special state rules.
Michigan: Michigan has two forms of joint tenancy. A traditional joint tenancy is formed when property is transferred to two or more persons using the language "as joint tenants and not as tenants in common." Any owner may terminate the joint tenancy unilaterally (without the consent of the other owner).
If, however, property is transferred to the new owners using the language "as joint tenants with right of survivorship" or to the new owners "and the survivor of them," the result is different. No owner can destroy this joint tenancy unilaterally. Even if you transfer your interest to someone else, that person takes it subject to the rights of your original co-owner. So if you were to die before your original co-owner, that co-owner would automatically own the whole property.
EXAMPLE: Alice and Ben own land in Michigan as "joint tenants with full right of survivorship." Alice sells her interest to Catherine and dies a few years later, while Ben is still alive. Ben now owns the whole property; Catherine owns nothing.
Oregon: Oregon doesn't use the term "joint tenancy"; instead, you create a survivorship estate. The result is the same as with a joint tenancy: when one owner dies, the surviving owner owns the whole property. But technically, creating a survivorship estate creates what the lawyers call "a tenancy in common in the life estate with cross-contingent remainders in the fee simple." (That clears it up, doesn't it?)
South Carolina: To hold real estate in joint tenancy, the deed should use the words "as joint tenants with rights of survivorship, and not as tenants in common," just to make it crystal clear. (S.C. Code Ann. § 27-7-40.)
Texas: If you want to set up a joint tenancy in Texas, you and the other joint tenants might have to sign a written agreement. For example, if you want to create a joint tenancy bank account, so that the survivor will get all the funds, specifying your arrangement on the bank's signature card may not be enough. Fortunately, a bank or real estate office should be able to give you a fill-in-the-blanks form.
Take this requirement seriously. A dispute over such an account ended up in the Texas Supreme Court. Two sisters had set up an account together, using a signature card that allowed the survivor to withdraw the funds. But when one sister died, and the other withdrew the funds, the estate of the deceased sister sued—and won the funds—because the signature card's language didn't satisfy the requirements of the Texas statute. (Stauffer v. Henderson, 801 S.W.2d 858 (Tex. 1991).) More recently, the Texas Supreme Court ruled that a married couple who owned investment accounts labeled "JT TEN" did have survivorship rights, even though they hadn't signed anything stating whether or not the account had a survivorship feature. Holmes v. Beatty, 290 S.W.3d 852 (Tex. 2009). But it's still better to be explicit about your intentions.
Joint tenancy and the different ways of co-owning property can be complicated. If you're dealing with the co-owned property of a loved one who died, and you're not sure how they co-owned it or what the implications are, find a probate attorney to help.
System Instructions: Use only the information provided above to answer the question. Answer in paragraph form and keep your answer to under 150 words.
Question: What happens if a home title is listed as Joint Tenants with Rights of Survivorship when one of the owners sells their share of the property to someone else? |
Information will be provided and you are to answer the questions based only on the information provided. Do not consult the internet or use prior knowledge. Please answer concisely. | How do the different segments compare in terms of financial results in the period discussed? | HAMPTON, N.H., Aug. 6, 2024 /PRNewswire/ -- Today, Planet Fitness, Inc. (NYSE: PLNT) reported financial results for its second quarter ended June 30, 2024.
"Since I stepped into the CEO role in June, I have become even more confident and excited about my decision to join such an iconic brand, supported by a strong foundation and team, a solid base of approximately 100 franchisees, and approximately 19.7 million members," said Colleen Keating, Chief Executive Officer. "During the quarter, we continued to demonstrate the unique strength of our asset-light, highly franchised business model by refinancing a portion of our debt and entering a $280 million accelerated share repurchase program as we strive to deliver enhanced shareholder value."
Ms. Keating continued, "As we enter our next chapter, we are committed to further defining our growth ambition and capitalizing on the meaningful opportunities across the industry both in the U.S. and internationally. This includes maintaining a steadfast focus on delivering an unparalleled member experience, evolving our brand messaging and operating under the principle that when our franchisees win, we win. By doing so, I'm confident in our potential for long-term sustainable growth of stores and members, and our ability to deliver significant value for shareholders."
Second Quarter Fiscal 2024 Highlights
- Total revenue increased from the prior year period by 5.1% to $300.9 million.
- System-wide same store sales increased 4.2%.
- System-wide sales increased to $1.2 billion from $1.1 billion in the prior year period.
- Net income attributable to Planet Fitness, Inc. was $48.6 million, or $0.56 per diluted share, compared to $41.1 million, or $0.48 per diluted share, in the prior year period.
- Net income increased $5.1 million to $49.3 million, compared to $44.2 million in the prior year period.
- Adjusted net income(1) increased $4.5 million to $62.2 million, or $0.71 per diluted share(1), compared to $57.7 million, or $0.65 per diluted share, in the prior year period.
- Adjusted EBITDA(1) increased $8.6 million to $127.5 million from $118.9 million in the prior year period.
- 18 new Planet Fitness stores were opened system-wide during the period, which included 17 franchisee-owned and 1 corporate-owned stores, bringing system-wide total stores to 2,617 as of June 30, 2024.
- Cash and marketable securities of $447.7 million, which includes cash and cash equivalents of $247.0 million, restricted cash of $47.8 million and marketable securities of $152.9 million as of June 30, 2024.
(1) Adjusted net income, Adjusted EBITDA and Adjusted net income per share, diluted are non-GAAP measures. For
reconciliations of Adjusted EBITDA and Adjusted net income to U.S. GAAP ("GAAP") net income and a computation
of Adjusted net income per share, diluted, see "Non-GAAP Financial Measures" accompanying this press release.
Operating Results for the Second Quarter Ended June 30, 2024
For the second quarter of 2024, total revenue increased $14.5 million or 5.1% to $300.9 million from $286.5 million
in the prior year period, including system-wide same store sales growth of 4.2%. By segment:
- Franchise segment revenue increased $8.9 million or 9.1% to $107.8 million from $98.8 million in the prior year period. Of the increase, $6.3 million was due to higher royalty revenue, of which $3.1 million was attributable to a franchise same store sales increase of 4.3%, $1.8 million was attributable to new stores opened since April 1, 2023 and $1.3 million was from higher royalties on annual fees. Franchise segment revenue also includes $2.1 million of higher National Advertising Fund ("NAF") revenue;
- Corporate-owned stores segment revenue increased $11.7 million or 10.3% to $125.5 million from $113.8 million in the prior year period. Of the increase, $6.6 million was attributable to corporate-owned stores included in the same store sales base, of which $1.9 million was attributable to a same store sales increase of 4.0%, $1.9 million was attributable to higher annual fee revenue and $2.9 million was attributable to other fees. Additionally, $5.1 million was from new stores opened and acquired since April 1, 2023; and
- Equipment segment revenue decreased $6.2 million or 8.4% to $67.7 million from $73.9 million in the prior year period. Of the decrease, $4.7 million was due to lower revenue from equipment sales to new franchisee-owned stores and $1.5 million was due to lower revenue from equipment sales to existing franchisee-owned stores. In the second quarter of 2024, we had equipment sales to 18 new franchisee-owned stores compared to 26 in the prior year period.
For the second quarter of 2024, net income attributable to Planet Fitness, Inc. was $48.6 million, or $0.56 per diluted share, compared to $41.1 million, or $0.48 per diluted share, in the prior year period. Net income was $49.3 million in the second quarter of 2024 compared to $44.2 million in the prior year period. Adjusted net income increased 7.8% to $62.2 million, or $0.71 per diluted share, from $57.7 million, or $0.65 per diluted share, in the prior year period. Adjusted net income has been adjusted to reflect a normalized income tax rate of 25.8% and 25.9% for the second quarter of 2024 and 2023, respectively, and excludes certain non-cash and other items that we do not consider in the evaluation of ongoing operational performance (see "Non-GAAP Financial Measures").
Adjusted EBITDA, which is defined as net income before interest, taxes, depreciation and amortization, adjusted for the impact of certain non-cash and other items that we do not consider in the evaluation of ongoing operational performance (see "Non-GAAP Financial Measures"), increased 7.2% to $127.5 million from $118.9 million in the prior year period.
Segment EBITDA represents our Total Segment EBITDA broken down by the Company's reportable segments.
Total Segment EBITDA is equal to EBITDA, which is defined as net income before interest, taxes, depreciation and amortization (see "Non-GAAP Financial Measures").
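For readers who want to verify the arithmetic, the short sketch below recomputes the year-over-year changes from the rounded dollar figures quoted in this release. It is an illustrative check added for clarity, not part of the original release; the inputs are the disclosed prior-year and current-year values, and small differences from the reported percentages reflect rounding in those disclosed figures.

```python
# Illustrative check of the year-over-year changes quoted in this release
# (Q2 2024 vs. Q2 2023, in $ millions; values as disclosed in the text above).
figures = {
    "Total revenue": (286.5, 300.9),
    "Adjusted EBITDA": (118.9, 127.5),
    "Adjusted net income": (57.7, 62.2),
    "Net income": (44.2, 49.3),
}

for metric, (prior, current) in figures.items():
    change = current - prior
    pct = 100.0 * change / prior
    # Reported: revenue +5.1%, Adjusted EBITDA +7.2%, Adjusted net income +7.8%;
    # small differences here come from rounding in the disclosed figures.
    print(f"{metric}: ${prior:.1f}M -> ${current:.1f}M (+{change:.1f}M, +{pct:.1f}%)")
```

The segment-level figures in the bullets that follow can be checked the same way.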
- Franchise segment EBITDA increased $11.3 million or 17.1% to $77.4 million. The increase is primarily the result of an $8.9 million increase in franchise segment revenue as described above, as well as a $3.1 million legal reserve that negatively impacted the second quarter of 2023 and $1.5 million of lower selling, general and administrative expense in the second quarter of 2024, partially offset by $2.2 million of higher NAF expense;
- Corporate-owned stores segment EBITDA increased $0.6 million or 1.2% to $49.3 million. The increase was primarily attributable to $0.8 million from the corporate-owned same store sales increase of 4.0%.
- Equipment segment EBITDA increased $1.4 million or 8.4% to $18.6 million. The increase was primarily driven by higher margin equipment sales related to an updated equipment mix as a result of the adoption of the new growth model.
Give an answer using only the context provided. | Can you provide a summary of the key points discussed in the document segment given, regarding infrastructure inequity and its impact on racial disparities? | It should be obvious that a broad and deep
investment in the nation’s long-neglected
and now failing infrastructure is necessary
to ensure the United States continues to be a
leading, prosperous democracy among nations.
A sound infrastructure helps us all – individuals,
communities, businesses, and government—
urban and rural. For those of us who have been
long disadvantaged in this nation through
structural racism and discrimination, however,
a sound infrastructure in every community
is especially critical as a bulwark against
the pernicious harms of discrimination and
segregation. Having a solid infrastructure
on which everyone stands helps counter
structural inequities driven by segregation
and longstanding differences in investments in
communities based on race.
Unequal investment is one of two types of
inequities stemming from our historic and
current infrastructure policies and practices.
There is inequity directly via unequal and
inadequate investments in Black communities,
and there is also an indirect inequity because the
harm from failing infrastructure is more severe
for Black communities. Black communities are
disproportionately low-wealth communities,
and people with little wealth commonly lack
the resources to protect themselves and to
recover quickly from disasters resulting from
infrastructure failures. When we fail to make
adequate infrastructure investments, we
subject African Americans to high risks of harm
from infrastructure failures.
This brief provides an overview of the need for a
broad range of infrastructure investments and
provides examples of both types of inequities.
While it focuses on African Americans, it
should be clear that other groups, particularly
Latinos, Native Americans, and low-wealth
individuals, are also disproportionately harmed
by our failure to invest adequately in America’s
infrastructure.
A Comprehensive Approach to Sound Infrastructure Is an Important Counter to Historic Racial Inequity (More Than Roads and Bridges, TMI Briefs, August 2021)
Roads and Bridges, and a Whole Lot More
When people hear the word “infrastructure,”
they often think of roads and bridges. There
is no question that roads and bridges are
infrastructure, but as civil rights leaders have
urged the nation to recognize, infrastructure
entails far more than just these two things.
Some argue that infrastructure
only encompasses roads,
bridges, tunnels, and railroads
and while those are all vital,
this definition is woefully
inadequate. Infrastructure
includes sewer systems, water
lines, waste facilities, and
telecommunications. It also
includes parks, housing, public
squares, economic centers, and
schools.1
Every four years, the American Society of
Civil Engineers (ASCE) assesses America’s
infrastructure and produces a report card.
ASCE evaluates 17 types of infrastructure and
is beginning to recognize the importance of
broadband.2 Roads and bridges are only two of
the 17. We argue for an even broader conception
of infrastructure than ASCE and recognize that
each form of infrastructure is important to the
future of the United States broadly, but also of
particular importance to African Americans.
We will illustrate this point by focusing on ten
types of infrastructure considered by ASCE and
their relevance for African Americans. We will
also address two types of infrastructure not
evaluated by ASCE: affordable housing and the
care economy.
ASCE’s current overall rating of America’s
infrastructure is a C-minus.3 A C grade means
that the infrastructure “shows general signs
of deterioration and requires attention.”4 A D
grade means that the infrastructure has “many
elements approaching the end of their service
life.”5 A C-minus grade, therefore, suggests
that much of America’s infrastructure is
deteriorating, and some of it is near the end of
its service life.
ASCE estimates that the country needs to
invest $2.59 trillion over the next ten years to
bring all of the country’s infrastructure to a good
condition.6 This expenditure is an investment
that will contribute to future economic growth
and not an expense that will simply drain our
resources. If we fail to make these investments
by 2039, ASCE estimates that our economy
will lose $10 trillion in GDP, more than three
million jobs, and $2.4 trillion in exports.7
These
numbers do not account for the lives lost, the
life expectancies reduced, and the suffering
that is caused by poor infrastructure.
[Infographic: ASCE 2021 Report Card grades by category (aviation D+, plus bridges, dams, drinking water, energy, hazardous waste, inland waterways, levees, ports, public parks, rail, roads, schools, solid waste, stormwater, transit, and wastewater). Source: ASCE 2021 Report Card for America's Infrastructure.]
Roads and Bridges: Costly and Unsafe
To appreciate the importance of roads and
bridges for African Americans, it is useful to
look at Mississippi, which is the state with the
largest share of African American residents,8
at nearly 40%.9 ASCE gives the nation’s roads
a D grade and the nation’s bridges a C grade.10
Mississippi’s roads and bridges are considerably
worse than the national average, with both rated
D-minus.11
ASCE finds that only 24% of Mississippi’s major
roads and highways are in good condition.
Forty-three percent are in poor condition,
and the remaining 33% are in mediocre or
fair condition.12 Bad roads impose costs
on motorists. For example, in Southaven,
Mississippi, ASCE estimates that damage from
bad roads costs the average driver $1,870
a year.13 This amounts to 6% of the median
household income for Black Mississippians,
and 3% for White residents of the state.14 ASCE
values the lost time due to drivers being stuck
in traffic in Southaven at an additional $1,080
per driver.15
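To put those percentages in perspective, the rough sketch below backs out the household income levels they imply. This calculation is added here for illustration and is not a figure stated in the brief; it simply divides the $1,870 annual cost by the 6% and 3% income shares cited above.

```python
# Illustrative back-of-the-envelope check (not a figure stated in the brief):
# the ASCE estimate of $1,870 per driver per year is said to equal about 6% of
# median household income for Black Mississippians and about 3% for White
# residents, which implies the income levels computed below.
annual_road_damage_cost = 1_870  # dollars per driver per year, Southaven estimate

for group, share_of_income in [("Black Mississippians", 0.06), ("White Mississippians", 0.03)]:
    implied_income = annual_road_damage_cost / share_of_income
    print(f"{group}: {share_of_income:.0%} of income -> implied median household income of about ${implied_income:,.0f}")
```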
Many Americans would struggle to pay for a vehicle repair bill of $1,870—or even half as much.16 For Black Mississippians, who have lower incomes than both average Americans and White Mississippians,17 the struggle is likely to be considerably harder.18 These repair bills could easily cause lasting damage to Black households in the state. When people are unable to use their vehicles, there is considerable hardship because, for much of America, Mississippi included, many day-to-day activities require access to a private vehicle. The loss of access to a vehicle could lead to the loss of a job, the inability to access health care, or the inability to vote. Individuals might need to turn to high-interest loans to pay for repairs, leading to substantial debt. Alternatively, individuals might be forced to drive an unsafe vehicle and put their health and the health of others at risk.
There is another health risk from Mississippi's bad roads. Mississippi has one of the highest automotive fatality rates in the country. The state's bad roads are implicated in about a third of the deaths.19
As mentioned above, Mississippi's bridges also received a D-minus grade. Among the reasons Mississippi's bridges earn such a poor grade is because only 63% of them are in good condition. More than a quarter of them (28%) are in fair condition, and 9% of them need substantial repairs.20 Over 400 Mississippi bridges have been closed because they are unsafe. There are many weight-restricted bridges that cannot support a load heavier than a pickup truck.21
As illustrated, roads and bridges are important to African Americans, but these are not their only important infrastructure needs.
Energy: The Need to Move Away from Fossil Fuels
While Black people do not comprise a large percentage of the population of Texas, by the numbers, more Black people live in Texas than in any other state.22 This year, a severe winter storm shut down the electrical grid in Texas, causing many people to go without heat and water for several days.23 This caused a severe crisis, resulting in almost 200 deaths, including people freezing to death, dying from carbon monoxide poisoning when they were forced to rely on dangerous sources of heat, and people dying when their medical devices failed, or they were unable to get life-saving medical treatment.24 The Houston Chronicle reported that "[t]he deaths come from 57 counties in all regions of the state but are disproportionately centered on the Houston area, which at times during the crisis accounted for nearly half of all power outages. Of the known ages, races and ethnicities of the victims, 74 percent were people of color. Half were at least 65. Six were children."25
[Photo: A highway closed due to snow and ice in Houston, Texas, on Feb. 15, 2021, when up to 2.5 million customers in the state were without power during the winter storm. Photo by Chengyue Lao/Xinhua via Getty Images.]
The Texas blackout during a severe winter storm
is a foreshadowing of future catastrophes,
as climate change will bring more extreme
weather.26 Power failures have increased by
more than 60% across the nation since 2015.27
A sustained power failure during a heatwave
could be more deadly than one during extremely
cold weather.28 Already, in early June 2021,
the Electric Reliability Council of Texas urged
Texans “to turn down thermostats and cut back
electricity use” after the reserve of available
electricity had shrunk to near critical levels.29
Our energy systems—our engines and our power
plants—mainly rely on fossil fuels that produce
greenhouse gases that lead to climate change.
Climate change causes extreme weather events
that are expected to exceed the capacity of our
infrastructure.30 To address this problem, we
need to move away from fossil fuels to help
limit the damage from climate change,31 and
we need to design our infrastructure with the
awareness that weather that used to be seen as
extreme will be increasingly normal.32
Climate change will be more harmful to African
Americans. The negative economic impact from
climate change is expected to be most severe
in the Southern United States, where the Black
population is concentrated.33 Additionally, White
Americans have greater wealth to endure natural
disasters stemming from climate change,34 and
the requirements for receiving government
aid in disaster areas are structured in ways
to disproportionately benefit wealthy White
homeowners.35 Consequently, researchers are
finding that natural disasters widen existing
inequalities.36
While there is much damage from climate
change expected in the future, African
Americans have been living with the harm from
the pollution and toxins from burning fossil fuels
for generations. African Americans are more
likely to live near fossil-fuel power plants, and
they are “exposed to 1.5 times as much of the
sooty pollution that comes from burning fossil
fuels as the population at large.”37 Exposure
to fossil-fuel pollutants increases the risk of
preterm births, asthma, cancer, and other
ailments.38 Moving to clean renewable energy
will bring significant health benefits to African
Americans.39
Draw your answer based solely on the text provided in the prompt. | Summarize this text about Vietnam's nonmarket economy status. | Shortly after extending normal trade relations (NTR) status to Vietnam in 2001, the United States designated Vietnam as a “nonmarket economy” (NME) for the purposes of antidumping (AD) and countervailing duty (CVD) investigations. The government of Vietnam has long sought to remove the designation, arguing it may hinder closer bilateral ties. During President Joseph Biden’s September 2023 visit to Hanoi, where he and then-Communist Party of Vietnam (CPV) Secretary-General Nguyen Phu Trong elevated the U.S.-Vietnam relationship to a “comprehensive strategic partnership,” Biden agreed to review Vietnam’s request to review its NME status. The following month, the Department of Commerce initiated an official review. During the review period, some Members of Congress raised concerns over whether Vietnam meets the conditions to be designated as a market economy. On August 2, 2024, Commerce announced its decision to sustain Vietnam’s NME designation, citing the Vietnamese government’s involvement in the economy, despite “substantive reforms,” as a factor for not lifting the designation.
U.S.-Vietnam Relations
Since 2010, the United States and Vietnam have forged a strategic partnership on many regional security and economic issues, prompted in part by shared concerns about China’s increased assertiveness in the region, and by burgeoning economic links. Over the last decade, Vietnam has become a major manufacturing center and one of the United States’ top ten trading partners. Top U.S. imports from Vietnam include consumer electronics, furniture, semiconductors and parts, apparel, and footwear. Vietnam is the second-largest source of U.S. apparel imports, after China. The September 2023 upgrade in relations was accompanied by several initiatives, including U.S. pledges to support Vietnam's development of its semiconductor industry (including with $2 million in U.S. government funds) and digital infrastructure ($12 million). Additionally, agreements under the U.S.-led, 14-country Indo-Pacific Economic Framework for Prosperity (IPEF) negotiations, which includes Vietnam, may further deepen U.S.-Vietnam economic ties. Under the doi moi (renovation) economic reforms that began in 1986, the Vietnamese government abandoned many aspects of central state planning, cut subsidies to state enterprises, reformed the price system, and opened the country to foreign direct investment (FDI). In a 2022 report, the Organisation for Economic Cooperation and Development (OECD) noted that the number of state-owned enterprises (SOEs) has decreased significantly, but SOEs still account for roughly 30% of the GDP. The U.S. government also actively monitors Vietnam’s currency practices, which were subject to U.S. investigations before the countries reached a bilateral agreement in July 2021. Vietnamese authorities limit daily fluctuations in the Vietnamese dong to 5% against the dollar, adjusted from 3% in 2022.
Nonmarket Economy Status under U.S. Trade Laws
The Commerce Department has the authority to designate countries as NMEs for the purpose of U.S. AD/CVD laws.
An NME is a country that Commerce determines “does not operate on market principles of cost or pricing structures, so that sales of merchandise in such country do not reflect the fair value of merchandise.” In designating a country as an NME, Commerce considers the extent to which (1) the country’s currency is convertible; (2) its wage rates result from free bargaining between labor and management; (3) joint ventures or other foreign investment are permitted; (4) the government owns or controls the means of production; and (5) the government controls the allocation of resources and price and output decisions. Commerce may also consider other factors that it considers appropriate. An NME designation remains in effect until revoked by Commerce. There are currently 12 countries, including Vietnam, designated as NMEs. While considering whether an NME is engaged in dumping, Commerce uses factors of production from a comparable market economy country to calculate the normal value for merchandise alleged to have been dumped in the United States. An affirmative NME designation may lead to higher tariffs. These methods have raised concerns at the WTO that a subsidy may be offset twice when both antidumping and countervailing duties are applied to NME products. After Commerce published the notice initiating the review of Vietnam’s NME status, interested parties submitted comments to Regulations.gov and Commerce held a public hearing on May 8, 2024. Some U.S. manufacturing groups urged Commerce to maintain Vietnam’s NME status, arguing that Vietnam has not met the statutory conditions (above) to lift the NME designation. Others promoted removal of the designation, citing the country’s overall reforms, including openness to foreign investment, free bargaining of wages, and currency convertibility.
Considerations for Congress
In conducting oversight, Congress may consider the potential implications of an NME designation on U.S.-Vietnam trade, and overall bilateral relations. According to press accounts, Vietnamese government officials have expressed regret over Commerce’s decision to sustain Vietnam’s NME status and some analysts have stated that the decision might hinder bilateral relations. Congress may consider the extent to which Vietnam’s economic reforms might be sufficient to satisfy the statutory conditions should Vietnam submit future requests for review. During the review period, some Members of Congress argued that Vietnam does not meet the conditions, including the prominence of state-owned enterprises (SOE) in Vietnam’s economy, “severe deficiencies” in Vietnam’s labor laws, and potential harm to U.S. industries and workers. The Commerce Department stated in its decision that despite Vietnam’s market-oriented reforms, the government “remains entrenched in many aspects of the Vietnamese economy,” including foreign exchange intervention, control of labor unions, and “significant state ownership and control over the means of production.” Other options for Congress might include linking the decision to other policy areas, such as Vietnam’s human rights record, which some observers say is poor and worsening, and/or foreign assistance. The Vietnam Human Rights Act (H.R. 3172), which would prohibit U.S. assistance to Vietnam's Ministry of Public Security and require the executive branch to put more emphasis on ensuring internet freedom in Vietnam, could be a potential vehicle for Members of Congress who would like to maintain the NME status irrespective of economic policy changes.
"================
<TEXT PASSAGE>
=======
[context document]
================
<QUESTION>
=======
[user request]
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question, based only on the information provided by the user. It should only use information in the article provided." | How are big pharmaceutical companies doing revenue-wise this year? I'm particularly interested in how this relates to their profits. Please keep your response under 200 words. | Key takeaways
Macro worries meet AI wonderwall. Stocks have managed to climb a wall of macro worries, thanks to largely solid earnings that we believe can expand beyond AI beneficiaries and continue to support prices. As Q3 begins, we look for:
- Greater dispersion as earnings growth broadens
- Alpha capacity in stocks chosen ― and avoided
- Fresh reason for an active bent in U.S. large caps
U.S. stocks held onto gains in the second quarter, even as concerns over stubborn inflation, strong economic data and reduced expectations for Fed rate cuts sprinkled cold water on the Q1 hot streak. Markets found support in relatively strong Q1 earnings, led primarily by a small group of high-flying mega-cap stocks. We see the earnings-growth gap between these leaders and the rest closing later this year, as shown below. This presents a compelling opportunity for stock selection, as earnings feed valuations.
While the “Magnificent 7” mega-caps were priced at roughly 34x earnings as of late May, the other 493 stocks in the S&P 500 traded at a much less demanding 17x. Yet a still-strong earnings profile means many of the top stocks aren’t necessarily expensive relative to their growth prospects. In all cases, individual analysis is key to ensuring share prices are well aligned to company fundamentals.
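One simple way to read those multiples is as earnings yields, the inverse of the price-to-earnings ratio. The sketch below does that conversion using the approximate 34x and 17x figures quoted above; it is an illustration added for clarity rather than part of the original commentary, and the inputs are rounded.

```python
# Illustrative conversion of the price-to-earnings multiples quoted above into
# earnings yields (earnings yield = 1 / P/E). The 34x and 17x inputs are the
# approximate figures cited in the text, not precise market data.
multiples = {
    "Magnificent 7 (approx.)": 34.0,
    "Other 493 S&P 500 stocks (approx.)": 17.0,
}

for group, pe in multiples.items():
    earnings_yield = 1.0 / pe
    print(f"{group}: {pe:.0f}x earnings -> {earnings_yield:.1%} earnings yield")
```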
[Figure: A narrowing gap and widening opportunity set. Consensus analyst expectations for year-over-year earnings growth, 2023-2024. Source: BlackRock Fundamental Equities, with data from FactSet as of May 30, 2024. Chart shows consensus analyst estimates for year-over-year earnings per share (EPS) growth of the “Magnificent 7” mega-cap stocks in the S&P 500 Index and the remaining constituents. Past performance is not indicative of current or future results. Indexes are unmanaged. It is not possible to invest directly in an index.]
Parsing a ‘stock picker’s paradise’
Macro factors (inflation, interest rates, etc.) still hold sway over daily moves at the broad index level, but we see company earnings growth as the catalyst for increased stock-level dispersion that could create mean reversion between the market’s leaders and laggards.
Notably, while we see the broad S&P 500 catching up to the Mag 7 toward the fourth quarter of this year, earnings growth looks particularly interesting for value stocks once you remove the index’s AI-supercharged top stock, which has heavily skewed the averages. Under this analysis, earnings growth for the Russell 1000 Value Index takes the lead by the third quarter.
This is not to suggest an outright preference for value stocks, though the valuation gap between value and growth is quite wide today. It does, however, indicate there is some stored upside in value stocks that investors can look to exploit.
Doing so may require looking to a style-pure value manager given that the indexes today are growth dominated. As shown below, the broad market is comprised of only 21% value names. The Russell 1000 Value Index belies its label at 57% core and growth, having experienced a 36% decline in value exposure over the past 25 years.
True value is hard to find
U.S. stock market style exposures, 2024
Taking stock equity market outlook: Chart showing the style exposures of three major U.S. market indexes.
Source: BlackRock Fundamental Equities, with data Morningstar as of April 30, 2024. Chart shows the composition, by style, of three major U.S. stock indices. Indexes are unmanaged. It is not possible to invest directly in an index.
“
A market in which earnings growth broadens beyond the prevailing leaders ― creating dispersion in the process ― is a stock picker’s paradise.
”
Alpha potential in opting out …
We often note the merits of skilled stock selection in the pursuit of alpha. And avoiding underperformers can be as important as choosing outperformers in this pursuit of benchmark-beating returns.
What are we avoiding today? Despite an overall preference for healthcare, we are skirting the big U.S. drug makers. Large-cap pharmaceutical companies face an inherent dilemma ― in other industries, products are evergreen once deployed, but pharmaceuticals have the life of a patent cycle. When those patents expire and cheaper generics come to market, revenues inevitably decline.
We see several major U.S. pharma companies losing patent protection on up to 70% of their revenue by 2030. Estimates suggest the industry could face a $100 billion drop in revenue as a result. Profits are also at risk of disproportionate decline, as it’s typically the oldest and highest-margin products that are losing patent protection.
At the same time, U.S. pharma is confronted with price pressure related to the Inflation Reduction Act (IRA), which gives Medicare the authority to begin negotiating prices on select drugs. That process is underway, with results (and potential price reductions) due in September.
The notable exception to our U.S. pharma aversion is the manufacturers of the newer GLP-1 “diabesity” drugs, which we believe are just beginning their success journey.
Several of our teams within BlackRock Fundamental Equities are also trimming positions in financials, as interest rate cuts tend to affect bank margins, and parts of the consumer sectors, where the end of pandemic-era excess savings and high inflation are beginning to show up in greater spending discipline. Credit card data reveals an uptick in delinquency rates at lower income levels, increasing loss rates for financials. In technology, we see reductions in software and services offset by buying in semiconductors, where generative AI needs are crowding out other technology spending.
… and leaning in
Beyond the buying in semiconductors, we see a platform-level bullishness around AI that is manifesting in new ways to tap into the megatrend. Our investors are finding opportunities outside of the accepted AI winners. Examples include companies that own data and those that provide memory for storing it; power companies and industrials that supply into AI infrastructure needs, including those that equip data center cooling systems; and, more recently, opportunities in AI-ready PCs that are set to be introduced this year.
Several of our active stock pickers are adding to positions in healthcare, with a preference for healthcare equipment and services. We have also been adding to communication services, including media and entertainment. Others are eyeing value in defensive areas of the market that were left behind in the cyclical rally since 2023. Utilities is one of these sectors. It is priced at a discount to the broad market and, we believe, poised for re-rating. The transition to renewable forms of energy will compel upgrades to existing power grids and push up private market electricity rates in the process, while the power required by a growing field of AI data centers is set to fuel a meaningful spike in energy demand.
Underappreciated alpha potential in U.S. large caps
In our full quarterly outlook, we challenge a long-held portfolio construction “truism” that asserts exposure to large-cap U.S. stocks is best achieved via passive index-tracking products. The argument suggests the U.S. stock market is so efficient and transparent that there is little alpha to be captured via active stock selection.
We disagree and offer analysis showing that a combination of decent excess returns from top managers plus a large U.S. representation in global indexes makes the total alpha opportunity in U.S. large caps the greatest on the global stage. And even as median managers may underperform, we see growing opportunity for skilled managers to add alpha given our outlook for greater earnings and valuation dispersion in what we have described as a new era for equity investing. | "================
<TEXT PASSAGE>
=======
Key takeaways
Macro worries meet AI wonderwall. Stocks have managed to climb a wall of macro worries, thanks to largely solid earnings that we believe can expand beyond AI beneficiaries and continue to support prices. As Q3 begins, we look for:
Greater dispersion as earnings growth broadens
Alpha capacity in stocks chosen ― and avoided
Fresh reason for an active bent in U.S. large caps
U.S. stocks held onto gains in the second quarter, even as concerns over stubborn inflation, strong economic data and reduced expectations for Fed rate cuts sprinkled cold water on the Q1 hot streak. Markets found support in relatively strong Q1 earnings, led primarily by a small group of high-flying mega-cap stocks. We see the earnings-growth gap between these leaders and the rest closing later this year, as shown below. This presents a compelling opportunity for stock selection, as earnings feed valuations.
While the “Magnificent 7” mega-caps were priced at roughly 34x earnings as of late May, the other 493 stocks in the S&P 500 traded at a much less demanding 17x. Yet a still-strong earnings profile means many of the top stocks aren’t necessarily expensive relative to their growth prospects. In all cases, individual analysis is key to ensuring share prices are well aligned to company fundamentals.
A narrowing gap and widening opportunity set
Consensus analyst expectations for year-over-year earnings growth, 2023-2024
Taking stock equity market outlook: Chart showing S&P 500 earnings estimates for 2024.
Source: BlackRock Fundamental Equities, with data from FactSet as of May 30, 2024. Chart shows consensus analyst estimates for year-over-year earnings per share (EPS) growth of the “Magnificent 7” mega-cap stocks in the S&P 500 Index and the remaining constituents. Past performance is not indicative of current or future results. Indexes are unmanaged. It is not possible to invest directly in an index.
Parsing a ‘stock picker’s paradise’
Macro factors (inflation, interest rates, etc.) still hold sway over daily moves at the broad index level, but we see company earnings growth as the catalyst for increased stock-level dispersion that could create mean reversion between the market’s leaders and laggards.
Notably, while we see the broad S&P 500 catching up to the Mag 7 toward the fourth quarter of this year, earnings growth looks particularly interesting for value stocks once you remove the index’s AI-supercharged top stock, which has heavily skewed the averages. Under this analysis, earnings growth for the Russell 1000 Value Index takes the lead by the third quarter.
This is not to suggest an outright preference for value stocks, though the valuation gap between value and growth is quite wide today. It does, however, indicate there is some stored upside in value stocks that investors can look to exploit.
Doing so may require looking to a style-pure value manager, given that the indexes today are growth-dominated. As shown below, the broad market comprises only 21% value names. The Russell 1000 Value Index belies its label at 57% core and growth, having experienced a 36% decline in value exposure over the past 25 years.
True value is hard to find
U.S. stock market style exposures, 2024
Taking stock equity market outlook: Chart showing the style exposures of three major U.S. market indexes.
Source: BlackRock Fundamental Equities, with data from Morningstar as of April 30, 2024. Chart shows the composition, by style, of three major U.S. stock indexes. Indexes are unmanaged. It is not possible to invest directly in an index.
“
A market in which earnings growth broadens beyond the prevailing leaders ― creating dispersion in the process ― is a stock picker’s paradise.
”
Alpha potential in opting out …
We often note the merits of skilled stock selection in the pursuit of alpha. And avoiding underperformers can be as important as choosing outperformers in this pursuit of benchmark-beating returns.
What are we avoiding today? Despite an overall preference for healthcare, we are skirting the big U.S. drug makers. Large-cap pharmaceutical companies face an inherent dilemma ― in other industries, products are evergreen once deployed, but pharmaceuticals have the life of a patent cycle. When those patents expire and cheaper generics come to market, revenues inevitably decline.
We see several major U.S. pharma companies losing patent protection on up to 70% of their revenue by 2030. Estimates suggest the industry could face a $100 billion drop in revenue as a result. Profits are also at risk of disproportionate decline, as it’s typically the oldest and highest-margin products that are losing patent protection.
At the same time, U.S. pharma is confronted with price pressure related to the Inflation Reduction Act (IRA), which gives Medicare the authority to begin negotiating prices on select drugs. That process is underway, with results (and potential price reductions) due in September.
The notable exception to our U.S. pharma aversion is the manufacturers of the newer GLP-1 “diabesity” drugs, which we believe are just beginning their success journey.
Several of our teams within BlackRock Fundamental Equities are also trimming positions in financials, as interest rate cuts tend to affect bank margins, and parts of the consumer sectors, where the end of pandemic-era excess savings and high inflation are beginning to show up in greater spending discipline. Credit card data reveals an uptick in delinquency rates at lower income levels, increasing loss rates for financials. In technology, we see reductions in software and services offset by buying in semiconductors, where generative AI needs are crowding out other technology spending.
… and leaning in
Beyond the buying in semiconductors, we see a platform-level bullishness around AI that is manifesting in new ways to tap into the megatrend. Our investors are finding opportunities outside of the accepted AI winners. Examples include companies that own data and those that provide memory for storing it; power companies and industrials that supply into AI infrastructure needs, including those that equip data center cooling systems; and, more recently, opportunities in AI-ready PCs that are set to be introduced this year.
Several of our active stock pickers are adding to positions in healthcare, with a preference for healthcare equipment and services. We have also been adding to communication services, including media and entertainment. Others are eyeing value in defensive areas of the market that were left behind in the cyclical rally since 2023. Utilities is one of these sectors. It is priced at a discount to the broad market and, we believe, poised for re-rating. The transition to renewable forms of energy will compel upgrades to existing power grids and push up private market electricity rates in the process, while the power required by a growing field of AI data centers is set to fuel a meaningful spike in energy demand.
Underappreciated alpha potential in U.S. large caps
In our full quarterly outlook, we challenge a long-held portfolio construction “truism” that asserts exposure to large-cap U.S. stocks is best achieved via passive index-tracking products. The argument suggests the U.S. stock market is so efficient and transparent that there is little alpha to be captured via active stock selection.
We disagree and offer analysis showing that a combination of decent excess returns from top managers plus a large U.S. representation in global indexes makes the total alpha opportunity in U.S. large caps the greatest on the global stage. And even as median managers may underperform, we see growing opportunity for skilled managers to add alpha given our outlook for greater earnings and valuation dispersion in what we have described as a new era for equity investing.
https://www.blackrock.com/us/individual/insights/taking-stock-quarterly-outlook
================
<QUESTION>
=======
How are big pharmaceutical companies doing revenue-wise this year? I'm particularly interested in how this relates to their profits. Please keep your response under 200 words.
================
<TASK>
=======
You are an expert in question answering. Your task is to reply to a query or question based only on the information provided by the user, drawing solely on the article provided." |
Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
[user request]
[context document] | Assess how quantum computing will impact existing cryptographic protocols and recommend ways to guarantee the effectiveness of blockchain systems in the post-quantum world. Analyze the prospects of the new quantum-resistant algorithms, review the scalability challenges of QKD, and explain the policy implications for national security. | The Challenges of Quantum Computing in
Cryptography
While quantum computing offers many benefits, it also
presents several challenges. One of the most significant
challenges is the threat of quantum attacks on current
encryption algorithms. As mentioned earlier, Shor's
algorithm can break RSA encryption, which is widely used
to secure data. Any data encrypted using RSA encryption is
vulnerable to quantum attacks [9].
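To make the preceding point concrete, here is a deliberately tiny, hypothetical Python sketch (it is not Shor's algorithm and is not taken from the paper): brute-force factoring of a toy RSA modulus stands in for the factoring step that Shor's algorithm performs efficiently at realistic key sizes, after which the private key and plaintext are recovered.

```python
# Toy illustration with tiny, insecure numbers: once the RSA modulus n is
# factored into p and q (the step Shor's algorithm makes efficient), the
# private exponent d can be reconstructed and any ciphertext decrypted.
from math import gcd

p, q = 61, 53                    # tiny "secret" primes; real keys use primes of ~1024+ bits
n = p * q                        # public modulus
e = 17                           # public exponent, coprime to (p-1)*(q-1)
assert gcd(e, (p - 1) * (q - 1)) == 1

message = 42
ciphertext = pow(message, e, n)  # encryption under the public key: c = m^e mod n

# Attacker's view: only n, e, and the ciphertext. Trial division stands in
# for Shor's algorithm, which performs this factoring step efficiently.
p_found = next(f for f in range(2, n) if n % f == 0)
q_found = n // p_found
phi = (p_found - 1) * (q_found - 1)
d = pow(e, -1, phi)              # recovered private exponent (requires Python 3.8+)
recovered = pow(ciphertext, d, n)
print(recovered == message)      # True: the plaintext is recovered without the private key
```

The same reconstruction applies to any RSA key once its modulus is factored, which is why data encrypted under RSA is described as vulnerable.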
1. Error Correction: The effects of noise and
decoherence on quantum computers make them
extremely prone to mistakes. Implementing trustworthy
quantum cryptography systems to overcome these
mistakes can be extremely difficult.
2. Scalability: Because quantum computing is still in its
infancy, the number of qubits that present quantum
computing systems can support is constrained. Due to
this, scaling quantum cryptography systems to handle
bigger data volumes and applications is challenging.
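As a rough, purely classical aid for the error-correction challenge listed in item 1 above (this sketch is not from the paper), encoding one bit into three copies and taking a majority vote suppresses independent errors from rate p to roughly 3p^2. Real quantum error correction is considerably harder, since it must also handle phase errors, no-cloning constraints, and syndrome measurement, so this is only an analogy for why redundancy helps.

```python
# Classical analogy only: triple repetition plus majority vote turns a
# physical error rate p into a logical failure rate of about 3p^2 - 2p^3.
import random

def logical_error_rate(p: float, trials: int = 200_000) -> float:
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))  # independent bit flips
        failures += flips >= 2                              # majority vote gives the wrong answer
    return failures / trials

random.seed(0)
for p in (0.10, 0.01):
    predicted = 3 * p * p - 2 * p ** 3
    print(f"physical {p:.2f} -> logical {logical_error_rate(p):.4f} (predicted {predicted:.4f})")
```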
Another challenge is developing new encryption
algorithms resistant to quantum attacks. This is because
current encryption algorithms that are secure against
classical attacks may not be secure against quantum attacks.
Therefore, researchers are actively developing new
quantum-resistant encryption algorithms that can be used in
the post-quantum era.
D. Policy Implications of Quantum Computing in
Cryptography
As quantum computing advances, policymakers must
carefully consider the national security and critical
infrastructure implications. Encryption algorithms are
essential for securing military communications, financial
transactions, and government data, making it crucial to
assess the impact of quantum computing on current
encryption standards and develop strategies to address any
vulnerabilities.
The National Institute of Standards and Technology
(NIST) has initiated a standardization process for post-
quantum cryptography to address this issue. The goal is to
create a portfolio of quantum-resistant algorithms that can be
widely implemented in the coming years. This process
involves a public competition in which researchers submit
their proposed encryption algorithms and undergo a rigorous
evaluation. NIST will then select the most promising
algorithms for standardization.
In addition to the need for quantum-resistant encryption,
policymakers must also consider the potential for quantum
computing to be used for offensive purposes. A quantum
computer could break into secure systems and access
sensitive data, which could be detrimental to national
security. Therefore, governments must establish policies and
regulations to prevent the misuse of quantum computing
technology and safeguard against potential threats.
E. Social Implications of Quantum Computing in
Cryptography
The impact of quantum computing on cryptography goes
beyond policy and security concerns. It also has social
implications that must be considered. For example, if
quantum computing can break current encryption
algorithms, it could significantly impact individual privacy.
Personal information such as medical records, financial data,
and online communications could be compromised.
Furthermore, developing new quantum-resistant
encryption algorithms will require significant investment
and research, which could limit access to these technologies.
This could widen the digital divide and create disparities in
access to secure communication channels, particularly for
marginalized communities.
In conclusion, quantum computing has the potential to
revolutionize cryptography, but it also presents significant
challenges. While new cryptographic techniques such as
QKD and quantum signature schemes can enhance security,
quantum computing can also be used for offensive purposes.
Therefore, policymakers and researchers must collaborate to
address these challenges and ensure critical infrastructure
security and individual privacy in the post-quantum era.
III. CRYPTOGRAPHY AND QUANTUM COMPUTING
A. Quantum Key Distribution
QKD is a method of securely sharing keys between two
parties based on the principles of quantum mechanics. This
approach takes advantage of the fact that any attempt to
intercept the keys will introduce detectable errors. QKD is
considered an unconditionally secure key distribution
method, making it ideal for sensitive applications. Although
QKD is still in the experimental stage, it has shown
promising results and is being studied extensively by
researchers worldwide.
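The detectability claim above can be illustrated with a small classical simulation in the style of BB84, the standard textbook QKD protocol (the paper itself does not name a specific protocol). The sketch below is a simplified model rather than a faithful physics simulation: an intercept-and-resend eavesdropper guesses the wrong measurement basis about half the time, which pushes the error rate on the sifted key to roughly 25%, so the legitimate parties can detect the intrusion by comparing a sample of their key.

```python
# Simplified BB84-style model: with no eavesdropper the sifted key is error-free;
# an intercept-and-resend attacker introduces ~25% errors, which is detectable.
import random

def sifted_error_rate(n_rounds: int, eavesdrop: bool) -> float:
    errors = sifted = 0
    for _ in range(n_rounds):
        bit = random.randint(0, 1)
        alice_basis = random.choice("+x")
        value, prep_basis = bit, alice_basis
        if eavesdrop:                                  # Eve measures in a random basis and resends
            eve_basis = random.choice("+x")
            if eve_basis != prep_basis:
                value = random.randint(0, 1)           # wrong basis gives a random outcome
            prep_basis = eve_basis
        bob_basis = random.choice("+x")
        if bob_basis == alice_basis:                   # sifting: keep matching-basis rounds only
            bob_value = value if bob_basis == prep_basis else random.randint(0, 1)
            sifted += 1
            errors += bob_value != bit
    return errors / sifted

random.seed(0)
print(f"no eavesdropper:   {sifted_error_rate(20_000, False):.3f}")   # ~0.000
print(f"intercept-resend:  {sifted_error_rate(20_000, True):.3f}")    # ~0.250
```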
One of the challenges in implementing QKD is the issue
of scalability. Current QKD systems are limited regarding
the distance they can distribute keys and the number of users
they can support [11]. Researchers are exploring new
technologies such as quantum repeaters, quantum memories,
and quantum routers to overcome this challenge. These
technologies will enable the distribution of keys over longer
distances and the support of more users, making QKD a
viable option for a wide range of applications.
B. Quantum-Resistant Cryptography
Quantum-resistant cryptography refers to cryptographic
techniques designed to be secure against attacks by quantum
computers. Quantum computers pose a threat to current
cryptographic algorithms like RSA and Elliptic Curve
Cryptography (ECC). Hence, new cryptographic techniques
that can resist quantum attacks are necessary. Lattice-based
cryptography is one of the most promising candidates for
post-quantum cryptography and is under extensive research
[11]. Code-based cryptography is another well-established
approach with a long research history. These approaches
are believed to provide high security against quantum
attacks.
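As one hypothetical illustration of the lattice-based approach mentioned above, the sketch below implements a stripped-down learning-with-errors (LWE) style encryption of a single bit. The parameters and design are chosen purely for readability and are not secure; standardized schemes such as ML-KEM use structured lattices, carefully chosen parameter sets, and encoding details that are omitted here.

```python
# Deliberately small LWE-style bit encryption (illustrative only, not secure):
# security would rest on the hardness of solving noisy linear equations mod q.
import random

q, n, m = 3329, 16, 64                 # modulus, secret length, number of public samples

def keygen():
    s = [random.randrange(q) for _ in range(n)]                         # secret key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randint(-2, 2) for _ in range(m)]                       # small noise
    b = [(sum(a * si for a, si in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return (A, b), s                                                    # public key, secret key

def encrypt(pub, bit):
    A, b = pub
    subset = [i for i in range(m) if random.random() < 0.5]             # random subset sum
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    x = (v - sum(ui * si for ui, si in zip(u, s))) % q                  # = bit*(q//2) + small noise
    return int(q // 4 < x < 3 * q // 4)                                 # round to the nearest multiple of q/2

random.seed(1)
pub, sec = keygen()
print([decrypt(sec, encrypt(pub, b)) for b in (0, 1, 1, 0)])            # [0, 1, 1, 0]
```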
However, one of the challenges in developing post-
quantum cryptographic algorithms is ensuring that they are
efficient and practical for real-world applications. Many of
the current post-quantum cryptographic algorithms are
computationally intensive, which could make them
impractical for use in resource-constrained environments
like mobile devices and the Internet of Things (IoT) [13]. To
address this challenge, researchers are exploring new
approaches to post-quantum cryptography that are efficient
and practical while maintaining the security of sensitive
information.
C. Cryptographic Protocols for Quantum Computing
Cryptographic protocols use cryptographic techniques to
secure quantum computing systems. These protocols are
designed to protect quantum computers from attacks, prevent
the tampering of quantum information, and ensure the
integrity of quantum cryptographic keys [2]. Examples of
cryptographic protocols for quantum computing include
quantum secret sharing [12], quantum oblivious transfer, and
quantum homomorphic encryption. These protocols are
essential for the secure operation of quantum computing
systems, and they are being studied extensively by
researchers worldwide.
Another challenge in developing cryptographic protocols
for quantum computing is ensuring they resist attacks by
quantum computers. Many current cryptographic protocols
are vulnerable to attacks by quantum computers, which
could compromise the security of quantum information [2].
To address this issue, researchers are developing new
cryptographic protocols resistant to attacks by quantum
computers, ensuring that quantum computing systems
remain secure. These new cryptographic protocols are being
studied extensively by researchers worldwide and can
potentially revolutionize how we secure information in the
quantum computing era.
D. Quantum Cryptography Standards
Quantum cryptography standards refer to guidelines
defining the requirements for implementing quantum
cryptography. These standards ensure that quantum
cryptography is implemented securely, reliably, and
efficiently [6]. Several organizations develop standards for quantum
cryptography, including the European Telecommunications
Standards Institute (ETSI) and the National Institute of
Standards and Technology (NIST) [7]. The previous two
organizations work consistently on quantum cryptography
standards, as they are developing guidelines and
recommendations for implementing quantum cryptography,
including post-quantum cryptographic algorithms [8].
Developing quantum cryptography standards will facilitate
the adoption of quantum cryptography and ensure that it is
implemented securely and efficiently [6].
The development of quantum cryptography standards is
essential for the widespread adoption of quantum
cryptographic systems. It will ensure that these systems are
interoperable and compatible with existing cryptographic
protocols. Standards will also help establish trust in quantum
cryptographic systems by providing a framework for
evaluating and certifying these systems. However,
developing standards for quantum cryptography is a
complex and challenging task. It requires the collaboration
of experts from various fields, including quantum physics,
computer science, cryptography and standards development.
As quantum cryptographic systems continue to evolve and
become more sophisticated, the development of standards
will become increasingly important to ensure their security
and reliability.
E. Quantum Computing and Blockchain
Quantum computing has the potential to disrupt the
security of blockchain systems. Blockchain is a
decentralized, tamper-proof database that records
transactions securely and transparently. However, the
security of blockchain systems depends on the underlying
cryptographic algorithms, which are vulnerable to attacks by
quantum computers. To address this issue, researchers are
developing post-quantum cryptographic algorithms that can
be used to secure blockchain systems. | Answer the question based solely on the information provided in the passage. Do not use any external knowledge or resources.
Assess how quantum computing will impact existing cryptographic protocols and recommend ways to guarantee the effectiveness of blockchain systems in the post-quantum world. Analyze the prospects of the new quantum-resistant algorithms, review the scalability challenges of QKD, and explain the policy implications for national security.
The Challenges of Quantum Computing in
Cryptography
While quantum computing offers many benefits, it also
presents several challenges. One of the most significant
challenges is the threat of quantum attacks on current
encryption algorithms. As mentioned earlier, Shor's
algorithm can break RSA encryption, which is widely used
to secure data. Any data encrypted using RSA encryption is
vulnerable to quantum attacks [9].
1. Error Correction: The effects of noise and
decoherence on quantum computers make them
extremely prone to mistakes. Implementing trustworthy
quantum cryptography systems to overcome these
mistakes can be extremely difficult.
2. Scalability: Because quantum computing is still in its
infancy, the number of qubits that present quantum
computing systems can support is constrained. Due to
this, scaling quantum cryptography systems to handle
bigger data volumes and applications is challenging.
Another challenge is developing new encryption
algorithms resistant to quantum attacks. This is because
current encryption algorithms that are secure against
classical attacks may not be secure against quantum attacks.
Therefore, researchers are actively developing new
quantum-resistant encryption algorithms that can be used in
the post-quantum era.
D. Policy Implications of Quantum Computing in
Cryptography
As quantum computing advances, policymakers must
carefully consider the national security and critical
infrastructure implications. Encryption algorithms are
essential for securing military communications, financial
transactions, and government data, making it crucial to
assess the impact of quantum computing on current
encryption standards and develop strategies to address any
vulnerabilities.
The National Institute of Standards and Technology
(NIST) has initiated a standardization process for post-
quantum cryptography to address this issue. The goal is to
create a portfolio of quantum-resistant algorithms that can be
widely implemented in the coming years. This process
involves a public competition in which researchers submit
their proposed encryption algorithms and undergo a rigorous
evaluation. NIST will then select the most promising
algorithms for standardization.
In addition to the need for quantum-resistant encryption,
policymakers must also consider the potential for quantum
computing to be used for offensive purposes. A quantum
computer could break into secure systems and access
sensitive data, which could be detrimental to national
security. Therefore, governments must establish policies and
regulations to prevent the misuse of quantum computing
technology and safeguard against potential threats.
E. Social Implications of Quantum Computing in
Cryptography
The impact of quantum computing on cryptography goes
beyond policy and security concerns. It also has social
implications that must be considered. For example, if
quantum computing can break current encryption
algorithms, it could significantly impact individual privacy.
Personal information such as medical records, financial data,
and online communications could be compromised.
Furthermore, developing new quantum-resistant
encryption algorithms will require significant investment
and research, which could limit access to these technologies.
This could widen the digital divide and create disparities in
access to secure communication channels, particularly for
marginalized communities.
In conclusion, quantum computing has the potential to
revolutionize cryptography, but it also presents significant
challenges. While new cryptographic techniques such as
QKD and quantum signature schemes can enhance security,
quantum computing can also be used for offensive purposes.
Therefore, policymakers and researchers must collaborate to
address these challenges and ensure critical infrastructure
security and individual privacy in the post-quantum era.
III. CRYPTOGRAPHY AND QUANTUM COMPUTING
A. Quantum Key Distribution
QKD is a method of securely sharing keys between two
parties based on the principles of quantum mechanics. This
approach takes advantage of the fact that any attempt to
intercept the keys will introduce detectable errors. QKD is
considered an unconditionally secure key distribution
method, making it ideal for sensitive applications. Although
QKD is still in the experimental stage, it has shown
promising results and is being studied extensively by
researchers worldwide.
One of the challenges in implementing QKD is the issue
of scalability. Current QKD systems are limited regarding
the distance they can distribute keys and the number of users
they can support [11]. Researchers are exploring new
technologies such as quantum repeaters, quantum memories,
and quantum routers to overcome this challenge. These
technologies will enable the distribution of keys over longer
distances and the support of more users, making QKD a
viable option for a wide range of applications.
B. Quantum-Resistant Cryptography
Quantum-resistant cryptography refers to cryptographic
techniques designed to be secure against attacks by quantum
computers. Quantum computers pose a threat to current
cryptographic algorithms like RSA and Elliptic Curve
Cryptography (ECC). Hence, new cryptographic techniques
that can resist quantum attacks are necessary. Lattice-based
cryptography is one of the most promising candidates for
post-quantum cryptography and is under extensive research
[11]. Code-based cryptography is another well-established
approach with a long research history. These approaches
are believed to provide high security against quantum
attacks.
However, one of the challenges in developing post-
quantum cryptographic algorithms is ensuring that they are
efficient and practical for real-world applications. Many of
the current post-quantum cryptographic algorithms are
computationally intensive, which could make them
impractical for use in resource-constrained environments
like mobile devices and the Internet of Things (IoT) [13]. To
address this challenge, researchers are exploring new
approaches to post-quantum cryptography that are efficient
and practical while maintaining the security of sensitive
information.
C. Cryptographic Protocols for Quantum Computing
Cryptographic protocols use cryptographic techniques to
secure quantum computing systems. These protocols are
designed to protect quantum computers from attacks, prevent
the tampering of quantum information, and ensure the
integrity of quantum cryptographic keys [2]. Examples of
cryptographic protocols for quantum computing include
quantum secret sharing [12], quantum oblivious transfer, and
quantum homomorphic encryption. These protocols are
essential for the secure operation of quantum computing
systems, and they are being studied extensively by
researchers worldwide.
Another challenge in developing cryptographic protocols
for quantum computing is ensuring they resist attacks by
quantum computers. Many current cryptographic protocols
are vulnerable to attacks by quantum computers, which
could compromise the security of quantum information [2].
To address this issue, researchers are developing new
cryptographic protocols resistant to attacks by quantum
computers, ensuring that quantum computing systems
remain secure. These new cryptographic protocols are being
studied extensively by researchers worldwide and can
potentially revolutionize how we secure information in the
quantum computing era.
D. Quantum Cryptography Standards
Quantum cryptography standards refer to guidelines
defining the requirements for implementing quantum
cryptography. These standards ensure that quantum
cryptography is implemented securely, reliably, and
efficiently [6]. Several organizations develop standards for quantum
cryptography, including the European Telecommunications
Standards Institute (ETSI) and the National Institute of
Standards and Technology (NIST) [7]. The previous two
organizations work consistently on quantum cryptography
standards, as they are developing guidelines and
recommendations for implementing quantum cryptography,
including post-quantum cryptographic algorithms [8].
Developing quantum cryptography standards will facilitate
the adoption of quantum cryptography and ensure that it is
implemented securely and efficiently [6].
The development of quantum cryptography standards is
essential for the widespread adoption of quantum
cryptographic systems. It will ensure that these systems are
interoperable and compatible with existing cryptographic
protocols. Standards will also help establish trust in quantum
cryptographic systems by providing a framework for
evaluating and certifying these systems. However,
developing standards for quantum cryptography is a
complex and challenging task. It requires the collaboration
of experts from various fields, including quantum physics,
computer science, cryptography and standards development.
As quantum cryptographic systems continue to evolve and
become more sophisticated, the development of standards
will become increasingly important to ensure their security
and reliability.
E. Quantum Computing and Blockchain
Quantum computing has the potential to disrupt the
security of blockchain systems. Blockchain is a
decentralized, tamper-proof database that records
transactions securely and transparently. However, the
security of blockchain systems depends on the underlying
cryptographic algorithms, which are vulnerable to attacks by
quantum computers. To address this issue, researchers are
developing post-quantum cryptographic algorithms that can
be used to secure blockchain systems.
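The passage does not say which post-quantum algorithms are being adapted for blockchains, so the following is only one plausible, hypothetical illustration: a minimal Lamport one-time signature, a hash-based construction whose security rests on the hash function rather than on factoring or discrete logarithms (related ideas underlie standardized hash-based schemes such as SPHINCS+). Each key pair may sign only a single message and the keys are large, so this is a teaching sketch rather than a deployable design.

```python
# Minimal Lamport one-time signature: a hash-based scheme often cited as a
# quantum-resistant building block for signing blockchain transactions.
# Strictly one message per key pair; keys and signatures are large.
import hashlib, secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def sign(sk, message: bytes):
    digest = int.from_bytes(H(message), "big")
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(pk, message: bytes, signature) -> bool:
    digest = int.from_bytes(H(message), "big")
    return all(H(signature[i]) == pk[i][(digest >> i) & 1] for i in range(256))

sk, pk = keygen()
tx = b"send 5 coins from A to B"           # stand-in for a blockchain transaction
sig = sign(sk, tx)
print(verify(pk, tx, sig))                  # True
print(verify(pk, b"send 500 coins", sig))   # False: the signature does not transfer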
https://american-cse.org/csci2023-ieee/pdfs/CSCI2023-47UoKEqjHou6fHnm3C9aVb/615100a490/615100a490.pdf |
Write the answer in one paragraph, using full sentences. Use only the document provided. Use language that is easy to understand. | what are the pros of plastic and plastic bottle use? | We have all seen the photos: birds nesting in piles of garbage along the shore, fish fatally caught in discarded netting, and huge mosaics of debris floating in the ocean. Even more alarmingly, what we see in these poignant images is only a portion of the problem. Approximately half of all plastic pollution is submerged below the ocean surface, much of it in the form of microplastics so small that we may never be able to clean them up completely.
To cut through the enormity of the ocean pollution crisis, one approach is to focus on something recognizable within these images of debris. Identify something you personally have used that may have ended up in the ocean—a water bottle perhaps. Find one in an image and ask yourself, how did it get there?
Plastic is a human-made, synthetic material that was first discovered more than one hundred years ago but did not broadly enter the public sphere until the 1950s. While currently a major culprit in ocean pollution, plastics are not inherently bad for humans or the environment. In fact, in a United Nations (UN) report on combatting the negative effects of plastics, the head of the UN Environment Programme Erik Solheim made a point to acknowledge that plastic is in fact a "miracle material."
"Thanks to plastics, countless lives have been saved in the health sector, the growth of clean energy from wind turbines and solar panels has been greatly facilitated, and safe food storage has been revolutionized," Solheim wrote in his introduction. Yet plastic bottles are one of the most common items within marine debris. So how did such a promising material become a symbol of human environmental desecration?
Plastic bottles are a single-use plastic, a product designed to be used only once and then discarded. Single-use plastics also include plastic packaging, for example of meats and fresh produce, which accounts for almost half of all plastic pollution. This type of plastic product is distinct from multi-use plastics, which can also pollute the ocean, but tend to amass less frequently due to their multi-use nature.
For example, refillable bottles can store water in a way that does not produce the repeated waste of a single-use plastic water bottle. Refillable bottles can be made of many materials, including plastic, but last much longer than a single-use bottle and can be recycled when they become old or damaged. For both types of bottles, how they are discarded determines their ultimate resting place and whether they become pollutants of the ocean.
A single-use plastic water bottle was manufactured, filled with water, and likely transported to a store, where it sat on a shelf waiting for a thirsty purchaser. Many of us drink out of plastic bottles several times during an average day, week, or month. Once we are finished with it, we have a choice where we leave that bottle:
Recycling bin: Bottles destined for recycling are unlikely to end up in the ocean, in their current form, unless they are mismanaged or lost in transit to a processing facility. However, due to recent limitations in how recyclables are internationally transferred and accepted for processing, many of these bottles will unfortunately end up in landfills rather than recycling facilities.
Trash can: These bottles also will not likely end up, in their current form, in the ocean. However, in areas across the globe with poor waste management or a lack of properly sealed landfills, as a bottle breaks down into microplastic particles over time, some particles may seep into the soil and eventually make their way into our waterways, ultimately entering and polluting the ocean.
Litter: These bottles may very well be carried by wind, storm water, or other processes to sewers, rivers, lakes, and other waterways that may ultimately deposit the bottle in the ocean.
Multi-use plastic bottles face these same pathways at the end of their lives—but of course this happens much less frequently since they can be used many times.
National Geographic Explorer Heather J. Koldewey works to empower communities around the world to participate in solving the ocean pollution crisis from single-use plastics via incremental individual actions—including a campaign called One Less, which encourages people to stop using single-use plastic water bottles altogether. One Less is currently based in and focused on London, England and its inhabitants, but anyone can make the choice to use one less single-use bottle. | Write the answer in one paragraph, using full sentences. Use only the document provided. Use language that is easy to understand.
what are the pros of plastic and plastic bottle use?
We have all seen the photos: birds nesting in piles of garbage along the shore, fish fatally caught in discarded netting, and huge mosaics of debris floating in the ocean. Even more alarmingly, what we see in these poignant images is only a portion of the problem. Approximately half of all plastic pollution is submerged below the ocean surface, much of it in the form of microplastics so small that we may never be able to clean them up completely.
To cut through the enormity of the ocean pollution crisis, one approach is to focus on something recognizable within these images of debris. Identify something you personally have used that may have ended up in the ocean—a water bottle perhaps. Find one in an image and ask yourself, how did it get there?
Plastic is a human-made, synthetic material that was first discovered more than one hundred years ago but did not broadly enter the public sphere until the 1950s. While currently a major culprit in ocean pollution, plastics are not inherently bad for humans or the environment. In fact, in a United Nations (UN) report on combatting the negative effects of plastics, the head of the UN Environment Programme Erik Solheim made a point to acknowledge that plastic is in fact a "miracle material."
"Thanks to plastics, countless lives have been saved in the health sector, the growth of clean energy from wind turbines and solar panels has been greatly facilitated, and safe food storage has been revolutionized," Solheim wrote in his introduction. Yet plastic bottles are one of the most common items within marine debris. So how did such a promising material become a symbol of human environmental desecration?
Plastic bottles are a single-use plastic, a product designed to be used only once and then discarded. Single-use plastics also include plastic packaging, for example of meats and fresh produce, which accounts for almost half of all plastic pollution. This type of plastic product is distinct from multi-use plastics, which can also pollute the ocean, but tend to amass less frequently due to their multi-use nature.
For example, refillable bottles can store water in a way that does not produce the repeated waste of a single-use plastic water bottle. Refillable bottles can be made of many materials, including plastic, but last much longer than a single-use bottle and can be recycled when they become old or damaged. For both types of bottles, how they are discarded determines their ultimate resting place and whether they become pollutants of the ocean.
A single-use plastic water bottle was manufactured, filled with water, and likely transported to a store, where it sat on a shelf waiting for a thirsty purchaser. Many of us drink out of plastic bottles several times during an average day, week, or month. Once we are finished with it, we have a choice where we leave that bottle:
Recycling bin: Bottles destined for recycling are unlikely to end up in the ocean, in their current form, unless they are mismanaged or lost in transit to a processing facility. However, due to recent limitations in how recyclables are internationally transferred and accepted for processing, many of these bottles will unfortunately end up in landfills rather than recycling facilities.
Trash can: These bottles also will not likely end up, in their current form, in the ocean. However, in areas across the globe with poor waste management or a lack of properly sealed landfills, as a bottle breaks down into microplastic particles over time, some particles may seep into the soil and eventually make their way into our waterways, ultimately entering and polluting the ocean.
Litter: These bottles may very well be carried by wind, storm water, or other processes to sewers, rivers, lakes, and other waterways that may ultimately deposit the bottle in the ocean.
Multi-use plastic bottles face these same pathways at the end of their lives—but of course this happens much less frequently since they can be used many times.
National Geographic Explorer Heather J. Koldewey works to empower communities around the world to participate in solving the ocean pollution crisis from single-use plastics via incremental individual actions—including a campaign called One Less, which encourages people to stop using single-use plastic water bottles altogether. One Less is currently based in and focused on London, England and its inhabitants, but anyone can make the choice to use one less single-use bottle. |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | Given the inherent instability of qubits and the challenges in maintaining quantum coherence, what theoretical advancements or technological breakthroughs would be necessary to achieve fault-tolerant quantum computing on a large scale? | Quantum Computing: What It Is, Why We Want It, and How We're Trying to Get It
Quantum mechanics emerged as a branch of physics in the early 1900s to explain nature on the scale of atoms and led to advances such as transistors, lasers, and magnetic resonance imaging. The idea to merge quantum mechanics and information theory arose in the 1970s but garnered little attention until 1982, when physicist Richard Feynman gave a talk in which he reasoned that computing based on classical logic could not tractably process calculations describing quantum phenomena. Computing based on quantum phenomena configured to simulate other quantum phenomena, however, would not be subject to the same bottlenecks. Although this application eventually became the field of quantum simulation, it didn't spark much research activity at the time.
In 1994, however, interest in quantum computing rose dramatically when mathematician Peter Shor developed a quantum algorithm, which could find the prime factors of large numbers efficiently. Here, “efficiently” means in a time of practical relevance, which is beyond the capability of state-of-the-art classical algorithms. Although this may seem simply like an oddity, it is impossible to overstate the importance of Shor's insight. The security of nearly every online transaction today relies on an RSA cryptosystem that hinges on the intractability of the factoring problem to classical algorithms.
WHAT IS QUANTUM COMPUTING?
Quantum and classical computers both try to solve problems, but the way they manipulate data to get answers is fundamentally different. This section provides an explanation of what makes quantum computers unique by introducing two principles of quantum mechanics crucial for their operation, superposition and entanglement.
Superposition is the counterintuitive ability of a quantum object, like an electron, to simultaneously exist in multiple “states.” With an electron, one of these states may be the lowest energy level in an atom while another may be the first excited level. If an electron is prepared in a superposition of these two states it has some probability of being in the lower state and some probability of being in the upper. A measurement will destroy this superposition, and only then can it be said that it is in the lower or upper state.
Understanding superposition makes it possible to understand the basic component of information in quantum computing, the qubit. In classical computing, bits are transistors that can be off or on, corresponding to the states 0 and 1. In qubits such as electrons, 0 and 1 simply correspond to states like the lower and upper energy levels discussed above. Qubits are distinguished from classical bits, which must always be in the 0 or 1 state, by their ability to be in superpositions with varying probabilities that can be manipulated by quantum operations during computations.
Entanglement is a phenomenon in which quantum entities are created and/or manipulated such that none of them can be described without referencing the others. Individual identities are lost. This concept is exceedingly difficult to conceptualize when one considers how entanglement can persist over long distances. A measurement on one member of an entangled pair will immediately determine measurements on its partner, making it appear as if information can travel faster than the speed of light. This apparent action at a distance was so disturbing that even Einstein dubbed it “spooky” (Born 1971, p. 158).
The popular press often writes that quantum computers obtain their speedup by trying every possible answer to a problem in parallel. In reality a quantum computer leverages entanglement between qubits and the probabilities associated with superpositions to carry out a series of operations (a quantum algorithm) such that certain probabilities are enhanced (i.e., those of the right answers) and others depressed, even to zero (i.e., those of the wrong answers). When a measurement is made at the end of a computation, the probability of measuring the correct answer should be maximized. The way quantum computers leverage probabilities and entanglement is what makes them so different from classical computers.
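To ground the last few paragraphs, here is a small state-vector sketch in plain NumPy (no quantum hardware or vendor SDK is assumed): the squared amplitudes of a superposition give the measurement probabilities, and a two-qubit Bell state shows the perfectly correlated outcomes that entanglement produces.

```python
# State-vector sketch of the two ideas above: superposition amplitudes give
# measurement probabilities, and entangled qubits give correlated outcomes.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                         # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard gate

plus = H @ ket0                                # equal superposition (|0> + |1>)/sqrt(2)
print(np.abs(plus) ** 2)                       # [0.5 0.5]: 50/50 measurement odds

# Two qubits: Hadamard on the first, then CNOT, gives an entangled Bell state.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = CNOT @ np.kron(plus, ket0)             # (|00> + |11>)/sqrt(2)
probs = np.abs(state) ** 2                     # basis order: 00, 01, 10, 11
print(probs.round(3))                          # [0.5 0. 0. 0.5]: the two outcomes always agree
```

An actual algorithm interleaves many such gates so that amplitudes of wrong answers cancel while those of right answers reinforce, which is the interference described in the paragraph above.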
WHY DO WE WANT IT?
The promise of developing a quantum computer sophisticated enough to execute Shor's algorithm for large numbers has been a primary motivator for advancing the field of quantum computation. To develop a broader view of quantum computers, however, it is important to understand that they will likely deliver tremendous speed-ups for only specific types of problems. Researchers are working to both understand which problems are suited for quantum speed-ups and develop algorithms to demonstrate them. In general, it is believed that quantum computers will help immensely with problems related to optimization, which play key roles in everything from defense to financial trading.
Multiple additional applications for qubit systems that are not related to computing or simulation also exist and are active areas of research, but they are beyond the scope of this overview. Two of the most prominent areas are (1) quantum sensing and metrology, which leverage the extreme sensitivity of qubits to the environment to realize sensing beyond the classical shot noise limit, and (2) quantum networks and communications, which may lead to revolutionary ways to share information.
HOW ARE WE TRYING TO GET IT?
Building quantum computers is incredibly difficult. Many candidate qubit systems exist on the scale of single atoms, and the physicists, engineers, and materials scientists who are trying to execute quantum operations on these systems constantly deal with two competing requirements. First, qubits need to be protected from the environment because it can destroy the delicate quantum states needed for computation. The longer a qubit survives in its desired state the longer its “coherence time.” From this perspective, isolation is prized. Second, however, for algorithm execution qubits need to be entangled, shuffled around physical architectures, and controllable on demand. The better these operations can be carried out the higher their “fidelity.” Balancing the required isolation and interaction is difficult, but after decades of research a few systems are emerging as top candidates for large-scale quantum information processing.
Superconducting systems, trapped atomic ions, and semiconductors are some of the leading platforms for building a quantum computer. Each has advantages and disadvantages related to coherence, fidelity, and ultimate scalability to large systems. It is clear, however, that all of these platforms will need some type of error correction protocols to be robust enough to carry out meaningful calculations, and how to design and implement these protocols is itself a large area of research. For an overview of quantum computing, with more detail regarding experimental implementations, see Ladd et al. (2010).
In this article, “quantum computing” has so far been used as a blanket term describing all computations that utilize quantum phenomena. There are actually multiple types of operational frameworks. Logical, gate-based quantum computing is probably the best recognized. In it, qubits are prepared in initial states and then subject to a series of “gate operations,” like current or laser pulses depending on qubit type. Through these gates the qubits are put in superpositions, entangled, and subjected to logic operations like the AND, OR, and NOT gates of traditional computation. The qubits are then measured and a result obtained.
Another framework is measurement-based computation, in which highly entangled qubits serve as the starting point. Then, instead of performing manipulation operations on qubits, single qubit measurements are performed, leaving the targeted single qubit in a definitive state. Based on the result, further measurements are carried out on other qubits and eventually an answer is reached.
A third framework is topological computation, in which qubits and operations are based on quasiparticles and their braiding operations. While nascent implementations of the components of topological quantum computers have yet to be demonstrated, the approach is attractive because these systems are theoretically protected against noise, which destroys the coherence of other qubits. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
Given the inherent instability of qubits and the challenges in maintaining quantum coherence, what theoretical advancements or technological breakthroughs would be necessary to achieve fault-tolerant quantum computing on a large scale?
<TEXT>
Quantum Computing: What It Is, Why We Want It, and How We're Trying to Get It
Quantum mechanics emerged as a branch of physics in the early 1900s to explain nature on the scale of atoms and led to advances such as transistors, lasers, and magnetic resonance imaging. The idea to merge quantum mechanics and information theory arose in the 1970s but garnered little attention until 1982, when physicist Richard Feynman gave a talk in which he reasoned that computing based on classical logic could not tractably process calculations describing quantum phenomena. Computing based on quantum phenomena configured to simulate other quantum phenomena, however, would not be subject to the same bottlenecks. Although this application eventually became the field of quantum simulation, it didn't spark much research activity at the time.
In 1994, however, interest in quantum computing rose dramatically when mathematician Peter Shor developed a quantum algorithm, which could find the prime factors of large numbers efficiently. Here, “efficiently” means in a time of practical relevance, which is beyond the capability of state-of-the-art classical algorithms. Although this may seem simply like an oddity, it is impossible to overstate the importance of Shor's insight. The security of nearly every online transaction today relies on an RSA cryptosystem that hinges on the intractability of the factoring problem to classical algorithms.
WHAT IS QUANTUM COMPUTING?
Quantum and classical computers both try to solve problems, but the way they manipulate data to get answers is fundamentally different. This section provides an explanation of what makes quantum computers unique by introducing two principles of quantum mechanics crucial for their operation, superposition and entanglement.
Superposition is the counterintuitive ability of a quantum object, like an electron, to simultaneously exist in multiple “states.” With an electron, one of these states may be the lowest energy level in an atom while another may be the first excited level. If an electron is prepared in a superposition of these two states it has some probability of being in the lower state and some probability of being in the upper. A measurement will destroy this superposition, and only then can it be said that it is in the lower or upper state.
Understanding superposition makes it possible to understand the basic component of information in quantum computing, the qubit. In classical computing, bits are transistors that can be off or on, corresponding to the states 0 and 1. In qubits such as electrons, 0 and 1 simply correspond to states like the lower and upper energy levels discussed above. Qubits are distinguished from classical bits, which must always be in the 0 or 1 state, by their ability to be in superpositions with varying probabilities that can be manipulated by quantum operations during computations.
Entanglement is a phenomenon in which quantum entities are created and/or manipulated such that none of them can be described without referencing the others. Individual identities are lost. This concept is exceedingly difficult to conceptualize when one considers how entanglement can persist over long distances. A measurement on one member of an entangled pair will immediately determine measurements on its partner, making it appear as if information can travel faster than the speed of light. This apparent action at a distance was so disturbing that even Einstein dubbed it “spooky” (Born 1971, p. 158).
The popular press often writes that quantum computers obtain their speedup by trying every possible answer to a problem in parallel. In reality a quantum computer leverages entanglement between qubits and the probabilities associated with superpositions to carry out a series of operations (a quantum algorithm) such that certain probabilities are enhanced (i.e., those of the right answers) and others depressed, even to zero (i.e., those of the wrong answers). When a measurement is made at the end of a computation, the probability of measuring the correct answer should be maximized. The way quantum computers leverage probabilities and entanglement is what makes them so different from classical computers.
WHY DO WE WANT IT?
The promise of developing a quantum computer sophisticated enough to execute Shor's algorithm for large numbers has been a primary motivator for advancing the field of quantum computation. To develop a broader view of quantum computers, however, it is important to understand that they will likely deliver tremendous speed-ups for only specific types of problems. Researchers are working to both understand which problems are suited for quantum speed-ups and develop algorithms to demonstrate them. In general, it is believed that quantum computers will help immensely with problems related to optimization, which play key roles in everything from defense to financial trading.
Multiple additional applications for qubit systems that are not related to computing or simulation also exist and are active areas of research, but they are beyond the scope of this overview. Two of the most prominent areas are (1) quantum sensing and metrology, which leverage the extreme sensitivity of qubits to the environment to realize sensing beyond the classical shot noise limit, and (2) quantum networks and communications, which may lead to revolutionary ways to share information.
HOW ARE WE TRYING TO GET IT?
Building quantum computers is incredibly difficult. Many candidate qubit systems exist on the scale of single atoms, and the physicists, engineers, and materials scientists who are trying to execute quantum operations on these systems constantly deal with two competing requirements. First, qubits need to be protected from the environment because it can destroy the delicate quantum states needed for computation. The longer a qubit survives in its desired state the longer its “coherence time.” From this perspective, isolation is prized. Second, however, for algorithm execution qubits need to be entangled, shuffled around physical architectures, and controllable on demand. The better these operations can be carried out the higher their “fidelity.” Balancing the required isolation and interaction is difficult, but after decades of research a few systems are emerging as top candidates for large-scale quantum information processing.
Superconducting systems, trapped atomic ions, and semiconductors are some of the leading platforms for building a quantum computer. Each has advantages and disadvantages related to coherence, fidelity, and ultimate scalability to large systems. It is clear, however, that all of these platforms will need some type of error correction protocols to be robust enough to carry out meaningful calculations, and how to design and implement these protocols is itself a large area of research. For an overview of quantum computing, with more detail regarding experimental implementations, see Ladd et al. (2010).
In this article, “quantum computing” has so far been used as a blanket term describing all computations that utilize quantum phenomena. There are actually multiple types of operational frameworks. Logical, gate-based quantum computing is probably the best recognized. In it, qubits are prepared in initial states and then subject to a series of “gate operations,” like current or laser pulses depending on qubit type. Through these gates the qubits are put in superpositions, entangled, and subjected to logic operations like the AND, OR, and NOT gates of traditional computation. The qubits are then measured and a result obtained.
Another framework is measurement-based computation, in which highly entangled qubits serve as the starting point. Then, instead of performing manipulation operations on qubits, single qubit measurements are performed, leaving the targeted single qubit in a definitive state. Based on the result, further measurements are carried out on other qubits and eventually an answer is reached.
A third framework is topological computation, in which qubits and operations are based on quasiparticles and their braiding operations. While nascent implementations of the components of topological quantum computers have yet to be demonstrated, the approach is attractive because these systems are theoretically protected against noise, which destroys the coherence of other qubits.
https://www.ncbi.nlm.nih.gov/books/NBK538701/ |
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | I'm starting up a business and need to decide very soon if I should have a brick-and-mortar or an online business, or if it's worth it to have both. Summarize key points in this article regarding the shift to online shopping and what I need to consider if I choose to operate solely online. Please make sure its 400 words or less. | Just as hybrid office-remote work arrangements have become more common, retail is becoming increasingly hybrid. The line between brick and mortar and e-commerce is now blurry, and in this evolving retail environment, many entrepreneurs are wondering the best way to position their small businesses for a prosperous year. Industry experts offered their predictions for the retail trends to watch this year.
Tech will continue to shape retail
Technology is always remaking the retail environment, from creating self-service kiosks in stores to supporting the e-commerce boom. However, with the rapid evolution of digital technology, artificial intelligence and machine learning — and the need for retailers to stand out from the e-commerce crowd — tech adoption has become more pressing.
The rate at which business owners opened e-commerce shops and consumers shopped online increased dramatically during the COVID-19 pandemic, and the trend has continued into the post-pandemic era.
“Companies that put e-commerce at the heart of their business strategies are prepared for the post-COVID-19 era,” said Yomi Kastro, founder and CEO of e-commerce platform Inveon. “There is an enormous opportunity for industries that are still more used to physical shopping, such as fast-moving consumer goods and pharmaceuticals.”
Even brick-and-mortar retailers should have some semblance of an e-commerce presence. Many consumers prefer to shop from the comfort of their own homes or at the spur of the moment. Consider offering your bestselling items in an e-commerce store to give your customers the option of purchasing your most popular products without visiting your physical location.
Retailers are reducing packaging waste.
As consumers become more environmentally conscious, look for retailers and brands to reduce their packaging waste, said Anthony Martin, CEO of Choice Mutual Insurance Agency. “Organizations will be more conscious about using recyclable or biodegradable packaging materials, which can be easily recycled or break down naturally in the environment,” he said.
The movement toward more sustainable packaging can be seen in the emergence of companies developing new packaging materials out of plant-derived materials. Furniture giant IKEA, for example, said it plans to eliminate plastic packaging for new products by 2025 and all products by 2028. Many other large brands are moving in the same direction.
Technology will shape retail workforce management.
Hiring and retaining quality workers will likely remain a challenge for retailers. One way businesses can adapt is by offering hybrid roles in which salespeople interact with customers online, thereby increasing their talent pool and reducing geographical limitations on the workers they can recruit.
“An hourly workforce has unique pay rules, labor regulations, compliance obligations and scheduling needs,” he said. “Companies need technology that can support their growing requirements and evolve with their business. The pandemic [accelerated] the adoption of new digital technologies, which can save organizations money by increasing efficiencies and improving the experience of their employees.”
Did You Know?
Consumers are increasingly comfortable shopping online, which means retailers will need to have an e-commerce presence to see success in [year]. To get your online shop started the right way, check out our article on overcoming common e-commerce challenges.
U.S. retailers need to protect consumer data
Consumer data protection will continue to be a major concern for U.S. retailers. High-profile data breaches not only cost retailers money but also damage their brands. And the stakes have only been heightened since the adoption of consumer data privacy laws such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
“With the landscape of Google getting rid of third-party cookies and Apple accelerating its privacy-first approach, businesses of all sizes will become under greater scrutiny for how they protect customer data,” said Steffen Schebesta, CEO of global digital marketing platform Sendinblue. “My sense is that the learning curve will be steep for American companies, and if penalties apply, the lessons will be expensive. It will get especially messy if the regulations are not enacted on a federal level. This would force businesses to deal with state-by-state regulations.”
While retail businesses collect consumer data for valid purposes, they need to ensure that information is highly protected from cyberattacks. Failure to defend sensitive customer data, such as financial information, could lead to massive lawsuits and fines, not to mention serious damage to brand reputation and lost opportunities for future business.
Micro fulfillment centers bring merchandise closer to buyers
With the e-commerce boom of recent years, massive fulfillment centers became a symbol of commerce. Jonathan Morav, head of product strategy at retail fulfillment company Fabric, said companies are now looking more to micro fulfillment centers, far smaller facilities that can be located closer to the residential areas where their customers live. They also enable businesses to take advantage of falling commercial real estate prices in downtown cities.
Underutilized space in malls and parking lots can hold micro fulfillment centers. While an average Amazon fulfillment center is around 800,000 square feet, a micro fulfillment center typically takes up less than 50,000 square feet and is often as small as 10,000 square feet, while still fulfilling the expectations of fast, free shipping, Morav said.
Brick-and-mortar retailers will create special customer experiences
Brick-and-mortar retailers have faced significant challenges as the industry has evolved. Retailers have a better chance of being successful if they create unique and memorable customer experiences by personalizing the shopping journey and making it fun.
Personal shopping services will increase.
Personal shopping services include online preorder and pickup, personal shoppers who walk customers through the showroom floor and much more. Personal shopping services create an extra layer of customer service that can enhance the overall experience and is difficult for e-commerce stores to replicate.
Did You Know?
Personalized shopping services don’t just enhance the customer experience; they create opportunities to cross-sell and upsell customers to drive more revenue.
Automation in pricing will continue to rise
In recent years, more and more companies have relied on automated technology to ensure their prices are properly set. Expect automated pricing tech to become even more commonplace, helping retailers reduce the amount of labor required by their staff.
Omri Traub, an executive at restaurant point-of-sale software company Toast, expects automation to play an even bigger part in this arena going forward. He pointed to “a new wave of companies” that provide such automation solutions as a service. Once implemented, he said, the tech will provide “low implementation costs and [reductions in] operating costs.”
Marketing and customer engagement will see changes.
Customers now primarily engage with small retail stores in an online, mobile-friendly model. That’s shifted the paradigm for marketers and customer engagement specialists, and the trend will continue this year.
Social media will continue to introduce customers to brands.
Social media is a major driver of the customer journey and online sales for many companies. Going forward, experts expect that hashtags and meme culture will play as large a role as traditional advertising methods for successful small businesses and their younger customers. One way that will happen, Gabor said, is through “creative social commerce,” in which platforms such as TikTok and Instagram fuel online shopping.
This includes experiential marketing, such as live events that are streamed over social media platforms. Whether people engage with the brand in person or online, these types of events offer widespread exposure.
Influencers will keep playing a big role.
Love them or hate them, influencers will remain relevant for nearly every retail brand. With companies highlighting authentic voices, Gabor said, consumers will be able to look for leadership among those individuals.
Key Takeaway
Consumers expect to be able to interact with your brand from their mobile devices, and they will continue looking to influencers for brand and product recommendations. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
I'm starting up a business and need to decide very soon if I should have a brick-and-mortar or an online business, or if it's worth it to have both. Summarize key points in this article regarding the shift to online shopping and what I need to consider if I choose to operate solely online. Please make sure it's 400 words or less.
<TEXT>
Just as hybrid office-remote work arrangements have become more common, retail is becoming increasingly hybrid. The line between brick and mortar and e-commerce is now blurry, and in this evolving retail environment, many entrepreneurs are wondering the best way to position their small businesses for a prosperous year. Industry experts offered their predictions for the retail trends to watch this year.
Tech will continue to shape retail
Technology is always remaking the retail environment, from creating self-service kiosks in stores to supporting the e-commerce boom. However, with the rapid evolution of digital technology, artificial intelligence and machine learning — and the need for retailers to stand out from the e-commerce crowd — tech adoption has become more pressing.
The rate at which business owners opened e-commerce shops and consumers shopped online increased dramatically during the COVID-19 pandemic, and the trend has continued into the post-pandemic era.
“Companies that put e-commerce at the heart of their business strategies are prepared for the post-COVID-19 era,” said Yomi Kastro, founder and CEO of e-commerce platform Inveon. “There is an enormous opportunity for industries that are still more used to physical shopping, such as fast-moving consumer goods and pharmaceuticals.”
Even brick-and-mortar retailers should have some semblance of an e-commerce presence. Many consumers prefer to shop from the comfort of their own homes or at the spur of the moment. Consider offering your bestselling items in an e-commerce store to give your customers the option of purchasing your most popular products without visiting your physical location.
Retailers are reducing packaging waste.
As consumers become more environmentally conscious, look for retailers and brands to reduce their packaging waste, said Anthony Martin, CEO of Choice Mutual Insurance Agency. “Organizations will be more conscious about using recyclable or biodegradable packaging materials, which can be easily recycled or break down naturally in the environment,” he said.
The movement toward more sustainable packaging can be seen in the emergence of companies developing new packaging materials out of plant-derived materials. Furniture giant IKEA, for example, said it plans to eliminate plastic packaging for new products by 2025 and all products by 2028. Many other large brands are moving in the same direction.
Technology will shape retail workforce management.
Hiring and retaining quality workers will likely remain a challenge for retailers. One way businesses can adapt is by offering hybrid roles in which salespeople interact with customers online, thereby increasing their talent pool and reducing geographical limitations on the workers they can recruit.
“An hourly workforce has unique pay rules, labor regulations, compliance obligations and scheduling needs,” he said. “Companies need technology that can support their growing requirements and evolve with their business. The pandemic [accelerated] the adoption of new digital technologies, which can save organizations money by increasing efficiencies and improving the experience of their employees.”
Did You Know?
Consumers are increasingly comfortable shopping online, which means retailers will need to have an e-commerce presence to see success in [year]. To get your online shop started the right way, check out our article on overcoming common e-commerce challenges.
U.S. retailers need to protect consumer data
Consumer data protection will continue to be a major concern for U.S. retailers. High-profile data breaches not only cost retailers money but also damage their brands. And the stakes have only been heightened since the adoption of consumer data privacy laws such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
“With the landscape of Google getting rid of third-party cookies and Apple accelerating its privacy-first approach, businesses of all sizes will become under greater scrutiny for how they protect customer data,” said Steffen Schebesta, CEO of global digital marketing platform Sendinblue. “My sense is that the learning curve will be steep for American companies, and if penalties apply, the lessons will be expensive. It will get especially messy if the regulations are not enacted on a federal level. This would force businesses to deal with state-by-state regulations.”
While retail businesses collect consumer data for valid purposes, they need to ensure that information is highly protected from cyberattacks. Failure to defend sensitive customer data, such as financial information, could lead to massive lawsuits and fines, not to mention serious damage to brand reputation and lost opportunities for future business.
Micro fulfillment centers bring merchandise closer to buyers
With the e-commerce boom of recent years, massive fulfillment centers became a symbol of commerce. Jonathan Morav, head of product strategy at retail fulfillment company Fabric, said companies are now looking more to micro fulfillment centers, far smaller facilities that can be located closer to the residential areas where their customers live. They also enable businesses to take advantage of falling commercial real estate prices in downtown cities.
Underutilized space in malls and parking lots can hold micro fulfillment centers. While an average Amazon fulfillment center is around 800,000 square feet, a micro fulfillment center typically takes up less than 50,000 square feet and is often as small as 10,000 square feet, while still fulfilling the expectations of fast, free shipping, Morav said.
Brick-and-mortar retailers will create special customer experiences
Brick-and-mortar retailers have faced significant challenges as the industry has evolved. Retailers have a better chance of being successful if they create unique and memorable customer experiences by personalizing the shopping journey and making it fun.
Personal shopping services will increase.
Personal shopping services include online preorder and pickup, personal shoppers who walk customers through the showroom floor and much more. Personal shopping services create an extra layer of customer service that can enhance the overall experience and is difficult for e-commerce stores to replicate.
Did You Know?
Personalized shopping services don’t just enhance the customer experience; they create opportunities to cross-sell and upsell customers to drive more revenue.
Automation in pricing will continue to rise
In recent years, more and more companies have relied on automated technology to ensure their prices are properly set. Expect automated pricing tech to become even more commonplace, helping retailers reduce the amount of labor required by their staff.
Omri Traub, an executive at restaurant point-of-sale software company Toast, expects automation to play an even bigger part in this arena going forward. He pointed to “a new wave of companies” that provide such automation solutions as a service. Once implemented, he said, the tech will provide “low implementation costs and [reductions in] operating costs.”
Marketing and customer engagement will see changes.
Customers now primarily engage with small retail stores in an online, mobile-friendly model. That’s shifted the paradigm for marketers and customer engagement specialists, and the trend will continue this year.
Social media will continue to introduce customers to brands.
Social media is a major driver of the customer journey and online sales for many companies. Going forward, experts expect that hashtags and meme culture will play as large a role as traditional advertising methods for successful small businesses and their younger customers. One way that will happen, Gabor said, is through “creative social commerce,” in which platforms such as TikTok and Instagram fuel online shopping.
This includes experiential marketing, such as live events that are streamed over social media platforms. Whether people engage with the brand in person or online, these types of events offer widespread exposure.
Influencers will keep playing a big role.
Love them or hate them, influencers will remain relevant for nearly every retail brand. With companies highlighting authentic voices, Gabor said, consumers will be able to look for leadership among those individuals.
Key Takeaway
Consumers expect to be able to interact with your brand from their mobile devices, and they will continue looking to influencers for brand and product recommendations.
https://www.businessnewsdaily.com/9836-future-of-retail.html |
Use only the information provided in the text above. Do not use any external resources or prior knowledge. | Summarize the proposed changes to Dodd-Frank 1. Explain it in easy to understand language in a numbered list. | 18 While commentators
generally agree that maturity transformation is socially valuable,19 the process makes financial nstitutions vulnerable to liquidity “runs.”
20 That is, when a financial institution’s short-term
creditors become concerned about its solvency or liquidity, they have incentives to demand
immediate conversion of their claims into cash,21 or to reduce their exposure in other ways that
force the institution to sell its illiquid assets at significantly discounted prices.22
A “run” on one financial institution can spread to other institutions that do business with it.23
Small banks typically hold deposit balances at larger banks, and large banks, securities firms, and
insurance companies often face significant exposure to one another through their over-the-counter
derivatives portfolios.24 Accordingly, troubles at one financial institution can spread to others,
resulting in additional “runs” and a “contagious panic throughout the financial system that causes
otherwise solvent financial institutions to become insolvent.”
25 This type of financial “contagion”
can cause asset price implosions as institutions liquidate assets in order to meet creditor demands,
further impairing their ability to lend and the ability of businesses to raise capital.26 Faced with a
choice between bailouts and economic collapse, policymakers have generally opted for bailouts,27
70 Among other things, Dodd-Frank reformed certain aspects of securities and
derivatives markets,71 imposed a variety of requirements related to mortgage standards,
72 and
created a new federal agency tasked with consumer financial protection (the Consumer Financial
Protection Bureau).73 Other portions of Dodd-Frank are specifically directed at the systemic risk
created by TBTF financial institutions. In order to minimize the risks that large financial
institutions like Lehman and AIG fail, Title I of Dodd-Frank establishes an enhanced prudential
regulatory regime for certain large bank holding companies and non-bank financial companies.74
And in order to resolve systemically important financial institutions in the event that they
nevertheless experience financial distress, Title II establishes a new resolution regime available
for such institutions outside of the Bankruptcy Code.75 The remaining sections of this report
discuss the legal issues raised by Titles I and II, their implementation by federal regulatory
agencies, and proposals to reform them.
Regulators have traditionally relied upon a variety of tools to minimize the risks of financial
institution failures. In order to reduce the risk of insolvency, regulators have imposed capital
requirements on commercial and investment banks.76 In order to reduce depositors’ incentives to “run,” regulators require all commercial banks to obtain minimum levels of deposit insurance
from the Federal Deposit Insurance Corporation (FDIC).77 In order to address liquidity problems,
the Federal Reserve has the authority to serve as a “lender of last resort” by making “discount
window” loans to commercial banks.78 Moreover, the Federal Reserve can lend to non-banks in
“unusual and exigent circumstances” pursuant to its authority under Section 13(3) of the Federal
Reserve Act.79 However, as the 2007-2009 financial crisis arguably demonstrated, sometimes
these measures have proven insufficient to prevent financial institution failures.
In response to these concerns, Title I of Dodd-Frank establishes an enhanced prudential
regulatory regime for certain large financial institutions.80 Specifically, the Title I regime applies
to (1) all bank holding companies with total consolidated assets of $50 billion or more, and (2)
any non-bank financial companies81 that the Financial Stability Oversight Council (FSOC)82
designates as systemically important.83 Section 165 of Dodd-Frank directs the Federal Reserve to
impose prudential standards on these institutions that “are more stringent than” those applicable
to other bank holding companies and non-bank financial companies, and that “increase in
stringency” based on certain statutorily-prescribed considerations.84 These enhanced standards
include
1. risk-based capital requirements and leverage limits;
85
2. liquidity requirements;
86
3. overall risk management requirements;
87
4. a requirement that the relevant companies develop resolution plans (so-called
“living wills”) describing how they can be rapidly resolved in the event of
material distress or failure;
88 and
5. credit exposure reporting requirements.89
Congress is currently considering whether to change the first basis for imposition of enhanced
prudential regulations on financial institutions—the automatic $50 billion threshold for bank
holding companies.90 That policy question is addressed in another recent Congressional Research
Service report.91 This section of the report accordingly provides a legal overview of (1) FSOC’s
process for designating non-banks as systemically important and FSOC’s designations to date, (2)
criticisms of FSOC’s designation process and responses, and (3) proposals to reform FSOC’s
designation process.
Proposed Legislation
A number of bills that would alter FSOC’s authority to designate non-banks for enhanced
regulation have been introduced in the 115th Congress. The Financial CHOICE Act of 2017, as
passed by the House of Representatives in June 2017, would repeal FSOC’s authority to
designate non-banks for enhanced regulation altogether.167 H.R. 4061, the Financial Stability
Oversight Council Improvement Act of 2017, which was reported out of the House Committee on
Financial Services in March 2018, proposes more limited changes to FSOC’s authority.168
Specifically, H.R. 4061 would require FSOC to consider “the appropriateness of the imposition of
prudential standards as opposed to other forms of regulation to mitigate the identified risks” in
determining whether to designate a non-bank as systemically important.169 The bill would further
require that FSOC provide designated companies with the opportunity to submit written materials
contesting their designation during FSOC’s annual reevaluation process.170 If FSOC determines
during a re-evaluation that a designation should not be rescinded, the bill would require it to
provide notice to the designated company “address[ing] with specificity” how it assessed the
relevant statutory factors in light of the company’s written submissions.171
The Trump Administration’s Views
In November 2017, the Trump Administration’s Treasury Department released a report outlining
four general recommendations for reforming FSOC’s process for designating non-banks as
systemically important.172 First, the report recommended that FSOC adopt an “activities-based” or “industry-wide” approach to assessing potential risks posed by non-banks.173 Under this
approach, FSOC would prioritize identifying specific financial activities and products that could
pose risks to financial stability, work with the primary financial regulatory agencies to address
those specific risks, and consider individual firms for designation as systemically important only
as a matter of last resort if more limited actions aimed at mitigating discrete risks are insufficient
to safeguard financial stability.174
Second, the Treasury Department recommended that FSOC “increas[e] the analytical rigor” of its
designation analyses.175 Specifically, the Report recommended that FSOC: (1) consider any
factors that might mitigate the exposure of a firm’s creditors and counterparties to its financial
distress; (2) focus on “plausible” (and not merely “possible”) asset liquidation risks; (3) evaluate
the likelihood that a firm will experience financial distress before evaluating how that distress
could be transmitted to other firms; (4) consider the benefits and costs of designations; and
(5) collapse its three-stage review process into two steps, notifying companies that they are under
active review during Stage 1 and voting on proposed designations after the completion of Stage
2.176
Third, the Treasury Department recommended enhancing engagement between FSOC and
companies under review, and improving the designation process’s transparency.177 Specifically,
the report recommended that FSOC: (1) engage earlier with companies under review and “explain
... the key risks” that FSOC has identified, (2) “undertake greater engagement” with companies’
primary financial regulators, and (3) publicly release explanations of its designation decisions.178
Fourth, the Treasury Department recommended that FSOC provide “a clear off-ramp” for nonbanks designated as systemically important.179 The report recommended that FSOC: (1) highlight
the key risks that led to a company’s designation, (2) “adopt a more robust and transparent
process for its annual reevaluations” that “make[s] clear how companies can engage with FSOC
... and what information companies should submit during a reevaluation,” (3) “develop a process
to enable a designated company to discuss potential changes it could make to address the risks it
could pose to financial stability,” and (4) “make clear that the standard it applies in its annual
reevaluations is the same as the standard for an initial designation of a nonbank financial
company.” | Summarize the proposed changes to Dodd-Frank 1. Explain it in easy to understand language in a numbered list.
18 While commentators
generally agree that maturity transformation is socially valuable,19 the process makes financial nstitutions vulnerable to liquidity “runs.”
20 That is, when a financial institution’s short-term
creditors become concerned about its solvency or liquidity, they have incentives to demand
immediate conversion of their claims into cash,21 or to reduce their exposure in other ways that
force the institution to sell its illiquid assets at significantly discounted prices.22
A “run” on one financial institution can spread to other institutions that do business with it.23
Small banks typically hold deposit balances at larger banks, and large banks, securities firms, and
insurance companies often face significant exposure to one another through their over-the-counter
derivatives portfolios.24 Accordingly, troubles at one financial institution can spread to others,
resulting in additional “runs” and a “contagious panic throughout the financial system that causes
otherwise solvent financial institutions to become insolvent.”
25 This type of financial “contagion”
can cause asset price implosions as institutions liquidate assets in order to meet creditor demands,
further impairing their ability to lend and the ability of businesses to raise capital.26 Faced with a
choice between bailouts and economic collapse, policymakers have generally opted for bailouts,27
70 Among other things, Dodd-Frank reformed certain aspects of securities and
derivatives markets,71 imposed a variety of requirements related to mortgage standards,
72 and
created a new federal agency tasked with consumer financial protection (the Consumer Financial
Protection Bureau).73 Other portions of Dodd-Frank are specifically directed at the systemic risk
created by TBTF financial institutions. In order to minimize the risks that large financial
institutions like Lehman and AIG fail, Title I of Dodd-Frank establishes an enhanced prudential
regulatory regime for certain large bank holding companies and non-bank financial companies.74
And in order to resolve systemically important financial institutions in the event that they
nevertheless experience financial distress, Title II establishes a new resolution regime available
for such institutions outside of the Bankruptcy Code.75 The remaining sections of this report
discuss the legal issues raised by Titles I and II, their implementation by federal regulatory
agencies, and proposals to reform them.
Regulators have traditionally relied upon a variety of tools to minimize the risks of financial
institution failures. In order to reduce the risk of insolvency, regulators have imposed capital
requirements on commercial and investment banks.76 In order to reduce depositors’ incentives to “run,” regulators require all commercial banks to obtain minimum levels of deposit insurance
from the Federal Deposit Insurance Corporation (FDIC).77 In order to address liquidity problems,
the Federal Reserve has the authority to serve as a “lender of last resort” by making “discount
window” loans to commercial banks.78 Moreover, the Federal Reserve can lend to non-banks in
“unusual and exigent circumstances” pursuant to its authority under Section 13(3) of the Federal
Reserve Act.79 However, as the 2007-2009 financial crisis arguably demonstrated, sometimes
these measures have proven insufficient to prevent financial institution failures.
In response to these concerns, Title I of Dodd-Frank establishes an enhanced prudential
regulatory regime for certain large financial institutions.80 Specifically, the Title I regime applies
to (1) all bank holding companies with total consolidated assets of $50 billion or more, and (2)
any non-bank financial companies81 that the Financial Stability Oversight Council (FSOC)82
designates as systemically important.83 Section 165 of Dodd-Frank directs the Federal Reserve to
impose prudential standards on these institutions that “are more stringent than” those applicable
to other bank holding companies and non-bank financial companies, and that “increase in
stringency” based on certain statutorily-prescribed considerations.84 These enhanced standards
include
1. risk-based capital requirements and leverage limits;
85
2. liquidity requirements;
86
3. overall risk management requirements;
87
4. a requirement that the relevant companies develop resolution plans (so-called
“living wills”) describing how they can be rapidly resolved in the event of
material distress or failure;
88 and
5. credit exposure reporting requirements.89
Congress is currently considering whether to change the first basis for imposition of enhanced
prudential regulations on financial institutions—the automatic $50 billion threshold for bank
holding companies.90 That policy question is addressed in another recent Congressional Research
Service report.91 This section of the report accordingly provides a legal overview of (1) FSOC’s
process for designating non-banks as systemically important and FSOC’s designations to date, (2)
criticisms of FSOC’s designation process and responses, and (3) proposals to reform FSOC’s
designation process.
Proposed Legislation
A number of bills that would alter FSOC’s authority to designate non-banks for enhanced
regulation have been introduced in the 115th Congress. The Financial CHOICE Act of 2017, as
passed by the House of Representatives in June 2017, would repeal FSOC’s authority to
designate non-banks for enhanced regulation altogether.167 H.R. 4061, the Financial Stability
Oversight Council Improvement Act of 2017, which was reported out of the House Committee on
Financial Services in March 2018, proposes more limited changes to FSOC’s authority.168
Specifically, H.R. 4061 would require FSOC to consider “the appropriateness of the imposition of
prudential standards as opposed to other forms of regulation to mitigate the identified risks” in
determining whether to designate a non-bank as systemically important.169 The bill would further
require that FSOC provide designated companies with the opportunity to submit written materials
contesting their designation during FSOC’s annual reevaluation process.170 If FSOC determines
during a re-evaluation that a designation should not be rescinded, the bill would require it to
provide notice to the designated company “address[ing] with specificity” how it assessed the
relevant statutory factors in light of the company’s written submissions.171
The Trump Administration’s Views
In November 2017, the Trump Administration’s Treasury Department released a report outlining
four general recommendations for reforming FSOC’s process for designating non-banks as
systemically important.172 First, the report recommended that FSOC adopt an “activities-based” or “industry-wide” approach to assessing potential risks posed by non-banks.173 Under this
approach, FSOC would prioritize identifying specific financial activities and products that could
pose risks to financial stability, work with the primary financial regulatory agencies to address
those specific risks, and consider individual firms for designation as systemically important only
as a matter of last resort if more limited actions aimed at mitigating discrete risks are insufficient
to safeguard financial stability.174
Second, the Treasury Department recommended that FSOC “increas[e] the analytical rigor” of its
designation analyses.175 Specifically, the Report recommended that FSOC: (1) consider any
factors that might mitigate the exposure of a firm’s creditors and counterparties to its financial
distress; (2) focus on “plausible” (and not merely “possible”) asset liquidation risks; (3) evaluate
the likelihood that a firm will experience financial distress before evaluating how that distress
could be transmitted to other firms; (4) consider the benefits and costs of designations; and
(5) collapse its three-stage review process into two steps, notifying companies that they are under
active review during Stage 1 and voting on proposed designations after the completion of Stage
2.176
Third, the Treasury Department recommended enhancing engagement between FSOC and
companies under review, and improving the designation process’s transparency.177 Specifically,
the report recommended that FSOC: (1) engage earlier with companies under review and “explain
... the key risks” that FSOC has identified, (2) “undertake greater engagement” with companies’
primary financial regulators, and (3) publicly release explanations of its designation decisions.178
Fourth, the Treasury Department recommended that FSOC provide “a clear off-ramp” for nonbanks designated as systemically important.179 The report recommended that FSOC: (1) highlight
the key risks that led to a company’s designation, (2) “adopt a more robust and transparent
process for its annual reevaluations” that “make[s] clear how companies can engage with FSOC
... and what information companies should submit during a reevaluation,” (3) “develop a process
to enable a designated company to discuss potential changes it could make to address the risks it
could pose to financial stability,” and (4) “make clear that the standard it applies in its annual
reevaluations is the same as the standard for an initial designation of a nonbank financial
company.”
Use only the information provided in the text above. Do not use any external resources or prior knowledge. |
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | Should someone be worried about acquiring an infection in the hospital? Why? Why is this a bad thing on a governmental level?
What is the difference between an uncomplicated UTI and a complicated UTI?
Why are polymicrobial bacteraemic UTIs particularly serious? How do these cases occur? | Healthcare-associated infections (HAIs) represent a significant public health burden, where an estimated 1 in 31 patients that enter a hospital will go on to develop an HAI, resulting in an additional $28.4 billion in healthcare-related expenses and 100,000 deaths1,2. Among HAIs, urinary tract infections (UTIs) are the most common, affecting over 400 million people worldwide in 2019 alone3. UTIs can be distinguished as complicated or uncomplicated. Up to 75–85% of uncomplicated UTIs are caused by uropathogenic Escherichia coli (UPEC)4,5,6. Unlike uncomplicated UTIs, catheter-associated UTI (CAUTI) is caused by a diverse range of pathogens, including UPEC (23.9%), fungal Candida spp. (17.8%), Enterococcus spp. (13.8%), P. aeruginosa (10.3%), and Klebsiella sp. (10.1%)7,8. Uncomplicated UTIs (uUTIs) predominantly affect pre-menopausal and non-pregnant women and occur in otherwise healthy individuals with no functional or structural abnormalities of the kidneys or urinary tract4. Complicated UTIs (cUTIs) occur in individuals with additional risk factors, including underlying health conditions and abnormalities or obstructions in the kidneys and urinary tract, pregnancy, and intermittent or long-term catheterization4.
It is estimated that 15-25% of hospitalized patients will receive a urinary catheter and that 75% of healthcare-acquired UTIs are associated with catheterization9. Furthermore, the risk of bacterial colonization of a catheter increases 3–7% per day upon placement, with the risk of catheter colonization and associated complications near 100% in long-term catheterized patients10,11. The treatment of UTIs has been complicated by the rise of antimicrobial-resistant (AMR) uropathogens, many of which were cited as urgent or serious threats in the CDC’s 2019 Antibiotic Threats Report. These AMR pathogens were classified as: (i) urgent, carbapenem-resistant Acinetobacter and carbapenem-resistant Enterobacterales; and (ii) serious, drug-resistant Candida, ESBL- Enterobacterales, vancomycin-resistant enterococci (VRE), multidrug-resistant Pseudomonas aeruginosa, and methicillin-resistant Staphylococcus aureus (MRSA)12. Indeed, a 2019 report listed UTIs as one of the leading global causes of AMR-associated deaths13. As these uropathogens become increasingly antibiotic-resistant, new treatment strategies are necessary to combat the rise of antibiotic-resistant UTIs.
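To see why the quoted daily figures translate into near-certain colonization for long-term catheterization, a rough compounding calculation helps (this is an illustration, not a figure from the study, and it assumes a constant, independent daily risk):

```python
# Illustrative only: cumulative chance that an indwelling catheter has been
# colonized, assuming an independent 3-7% risk on each day it stays in place.
for daily_risk in (0.03, 0.07):
    for days in (7, 30, 90):
        cumulative = 1 - (1 - daily_risk) ** days
        print(f"daily risk {daily_risk:.0%}, day {days:3d}: cumulative {cumulative:.0%}")
```

Under these assumptions the cumulative risk reaches roughly 60-90% after a month and approaches 100% by three months, consistent with the near-100% risk cited for long-term catheterized patients.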
Another factor complicating the treatment of CAUTIs is the high prevalence of polymicrobial catheter colonization, with 31–87% of catheters and urine from catheterized patients colonized by two or more species, depending on the study14,15,16. Polymicrobial bacteraemic UTIs are associated with increased mortality relative to bacteraemic UTIs caused by a single uropathogen17. The relationships between the bacteria in these polymicrobial communities is still poorly understood, including the mechanistic drivers of the positive species interactions that promote species co-occurrence and the negative species interactions that prevent co-occurrence in the same microenvironment. During urinary catheterization, the host wounding response results in the deposition of host proteins, including fibrinogen, onto the catheter that can be used by several bacterial species as a food source and as a means to adhere to and form biofilms on the catheter18,19,20. Positive co-occurrences between species may thus result from pioneer species that are able to colonize the catheter and then recruit other microbial species, forming polymicrobial communities. While this has been demonstrated to occur in the polymicrobial communities of oral biofilms, wherein certain groups of bacteria, including streptococci and Actinomyces species, serve as the pioneering species that can adhere to enamel pellicle and facilitate the colonization of other microbial species, the establishment of pioneer species and the subsequent effects on colonization in CAUTI has yet to be fully elucidated21,22. Within these communities, “cross-feeding” or “cross-signaling” may occur to augment the growth and/or biofilm formation of one or more of the species. Further, it has been shown that E. faecalis can suppress the host innate immune response, allowing for augmented growth of E. coli in a co-infection model of CAUTI23. Negative co-occurrences may be due to competition for nutrients or alterations in host response. For instance, MRSA elicits a robust host response which may result in decreased co-colonization by other bacterial species that are unable to withstand this response24. Understanding the mechanisms that drive the formation and persistence of these communities is essential to designing new therapeutics that target the bacterial interactions that promote CAUTI. | [question]
Should someone be worried about acquiring an infection in the hospital? Why? Why is this a bad thing on a governmental level?
What is the difference between an uncomplicated UTI and a complicated UTI?
Why are polymicrobial bacteraemic UTIs particularly serious? How do these cases occur?
=====================
[text]
Healthcare-associated infections (HAIs) represent a significant public health burden, where an estimated 1 in 31 patients that enter a hospital will go on to develop an HAI, resulting in an additional $28.4 billion in healthcare-related expenses and 100,000 deaths1,2. Among HAIs, urinary tract infections (UTIs) are the most common, affecting over 400 million people worldwide in 2019 alone3. UTIs can be distinguished as complicated or uncomplicated. Up to 75–85% of uncomplicated UTIs are caused by uropathogenic Escherichia coli (UPEC)4,5,6. Unlike uncomplicated UTIs, catheter-associated UTI (CAUTI) is caused by a diverse range of pathogens, including UPEC (23.9%), fungal Candida spp. (17.8%), Enterococcus spp. (13.8%), P. aeruginosa (10.3%), and Klebsiella sp. (10.1%)7,8. Uncomplicated UTIs (uUTIs) predominantly affect pre-menopausal and non-pregnant women and occur in otherwise healthy individuals with no functional or structural abnormalities of the kidneys or urinary tract4. Complicated UTIs (cUTIs) occur in individuals with additional risk factors, including underlying health conditions and abnormalities or obstructions in the kidneys and urinary tract, pregnancy, and intermittent or long-term catheterization4.
It is estimated that 15-25% of hospitalized patients will receive a urinary catheter and that 75% of healthcare-acquired UTIs are associated with catheterization9. Furthermore, the risk of bacterial colonization of a catheter increases 3–7% per day upon placement, with the risk of catheter colonization and associated complications near 100% in long-term catheterized patients10,11. The treatment of UTIs has been complicated by the rise of antimicrobial-resistant (AMR) uropathogens, many of which were cited as urgent or serious threats in the CDC’s 2019 Antibiotic Threats Report. These AMR pathogens were classified as: (i) urgent, carbapenem-resistant Acinetobacter and carbapenem-resistant Enterobacterales; and (ii) serious, drug-resistant Candida, ESBL- Enterobacterales, vancomycin-resistant enterococci (VRE), multidrug-resistant Pseudomonas aeruginosa, and methicillin-resistant Staphylococcus aureus (MRSA)12. Indeed, a 2019 report listed UTIs as one of the leading global causes of AMR-associated deaths13. As these uropathogens become increasingly antibiotic-resistant, new treatment strategies are necessary to combat the rise of antibiotic-resistant UTIs.
Another factor complicating the treatment of CAUTIs is the high prevalence of polymicrobial catheter colonization, with 31–87% of catheters and urine from catheterized patients colonized by two or more species, depending on the study14,15,16. Polymicrobial bacteraemic UTIs are associated with increased mortality relative to bacteraemic UTIs caused by a single uropathogen17. The relationships between the bacteria in these polymicrobial communities is still poorly understood, including the mechanistic drivers of the positive species interactions that promote species co-occurrence and the negative species interactions that prevent co-occurrence in the same microenvironment. During urinary catheterization, the host wounding response results in the deposition of host proteins, including fibrinogen, onto the catheter that can be used by several bacterial species as a food source and as a means to adhere to and form biofilms on the catheter18,19,20. Positive co-occurrences between species may thus result from pioneer species that are able to colonize the catheter and then recruit other microbial species, forming polymicrobial communities. While this has been demonstrated to occur in the polymicrobial communities of oral biofilms, wherein certain groups of bacteria, including streptococci and Actinomyces species, serve as the pioneering species that can adhere to enamel pellicle and facilitate the colonization of other microbial species, the establishment of pioneer species and the subsequent effects on colonization in CAUTI has yet to be fully elucidated21,22. Within these communities, “cross-feeding” or “cross-signaling” may occur to augment the growth and/or biofilm formation of one or more of the species. Further, it has been shown that E. faecalis can suppress the host innate immune response, allowing for augmented growth of E. coli in a co-infection model of CAUTI23. Negative co-occurrences may be due to competition for nutrients or alterations in host response. For instance, MRSA elicits a robust host response which may result in decreased co-colonization by other bacterial species that are unable to withstand this response24. Understanding the mechanisms that drive the formation and persistence of these communities is essential to designing new therapeutics that target the bacterial interactions that promote CAUTI.
https://www.nature.com/articles/s41467-023-44095-0
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. |
Pull your answer from the text below only. | How dangerous is meningitis? | Bacterial meningitis is a rare but potentially fatal disease. Several types of bacteria can first cause an upper respiratory tract infection and then travel through the bloodstream to the brain. The disease also can occur when certain bacteria invade the meninges directly. Bacterial meningitis can cause stroke, hearing loss, and permanent brain damage. Pneumococcal meningitis is the most common form of meningitis and is the most serious form of bacterial meningitis. Some 6,000 cases of pneumococcal meningitis are reported in the United States each year. The disease is caused by the bacterium Streptococcus pneumoniae, which also causes pneumonia, blood poisoning (septicemia), and ear and sinus infections. At particular risk are children under age 12 and adults with a weakened immune system. People who have had pneumococcal meningitis often suffer neurological damage ranging from deafness to severe brain damage. Meningococcal meningitis is caused by the bacterium Neisseria meningitidis. Each year in the United States about 2,600 people get this highly contagious disease. High-risk groups include infants under the age of 1 year, people with suppressed immune systems, travelers to foreign countries where the disease is endemic, and military recruits and others who reside in dormitories. Between 10-15 percent of cases are fatal, with another 10-15 percent causing brain damage and other serious side effects. Haemophilus influenzae meningitis was at one time the most common form of bacterial meningitis. Fortunately, the Haemophilus influenzae b vaccine has greatly reduced the number of cases in the United States (see Treatment section). Those most at risk of getting this disease are children in child-care settings and children who do not have access to this vaccine. Other forms of bacterial meningitis include Listeria monocytogenes meningitis (in which certain foods such as unpasteurized dairy or deli meats are sometimes implicated); Escherichia coli meningitis, which is most common in elderly adults and newborns and may be transmitted to a baby through the birth canal; and Mycobacterium tuberculosis meningitis, a rare disease that occurs when the bacterium that causes tuberculosis attacks the meninges. Viral, or aseptic, meningitis is usually caused by enteroviruses—common viruses that enter the body through the mouth and travel to the brain and surrounding tissues where they multiply. Enteroviruses are present in mucus, saliva, and feces, and can be transmitted through direct contact with an infected person or an infected object or surface. Other viruses that cause meningitis include Varicella zoster (the virus that causes chicken pox and can appear decades later as shingles), influenza, mumps, HIV, and Herpes simplex type 2 (genital herpes). Fungal infections can affect the brain. The most common form of fungal meningitis is caused by the fungus Cryptococcus neoformans (found mainly in dirt and bird droppings). It can be slow to develop and smolder for weeks. Although treatable, fungal meningitis often recurs in nearly half of affected persons. Parasitic causes include cysticercosis (a tapeworm infection in the brain), which is common in other parts of the world, as well as cerebral malaria. There are rare cases of amoebic meningitis, sometimes related to fresh water swimming, which can be rapidly fatal.
People who are suspected of having meningitis or encephalitis should receive immediate, aggressive medical treatment. Both diseases can progress quickly and have the potential to cause severe, irreversible neurological damage. Effective vaccines are available to prevent Haemophilus influenzae, pneumococcal and meningococcal meningitis. Two types of vaccines are available in the United States to help prevent meningococcal meningitis. The Centers for Disease Control and Prevention recommends vaccination with a meningococcal conjugate vaccine for all preteens and teens ages 11 to 12 years, with a booster dose at 16 years old. Teens and young adults (ages 16 through 23) also may be vaccinated with a serogroup B meningococcal vaccine. If meningococcal meningitis is diagnosed, people in close contact with an infected individual should be given preventative antibiotics. Early treatment of bacterial meningitis involves antibiotics that can cross the blood-brain barrier (a lining of cells that keeps harmful micro-organisms and chemicals from entering the brain). Appropriate antibiotic treatment for most types of meningitis can greatly reduce the risk of dying from the disease. Anticonvulsants to prevent seizures and corticosteroids to reduce brain inflammation may be prescribed. Infected sinuses may need to be drained. Corticosteroids such as prednisone may be ordered to relieve brain pressure and swelling and to prevent hearing loss that is common in some forms of meningitis. Lyme disease, a bacterial infection, is treated with antibiotics. Viral meningitis is rarely life threatening and no specific treatment is needed. Fungal meningitis is treated with intravenous antifungal medications. Antiviral drugs used to treat viral encephalitis include acyclovir and ganciclovir. For most encephalitis-causing viruses, no specific treatment is available. Autoimmune causes of encephalitis are treated with additional immunosuppressant drugs and screening for underlying tumors when appropriate. Acute disseminated encephalomyelitis, a non-infectious inflammatory brain disease mostly seen in children, is treated with steroids. Anticonvulsants may be prescribed to stop or prevent seizures. Corticosteroids can reduce brain swelling. Affected individuals with breathing difficulties may require artificial respiration. Once the acute illness is under control, comprehensive rehabilitation should include cognitive rehabilitation and physical, speech, and occupational therapy. Can meningitis and encephalitis be prevented? Effective vaccines are available to prevent some forms of meningitis. People should avoid sharing food, utensils, glasses, and other objects with someone who may be exposed to or have the infection. People should wash their hands often with soap and rinse under running water. People who live, work, or go to school with someone who has been diagnosed with bacterial meningitis may be asked to take antibiotics for a few days as a preventive measure. Also, people should limit outdoor activities at night, wear long-sleeved clothing when outdoors, use insect repellents that are most effective for that particular region of the country, and rid lawn and outdoor areas of free-standing pools of water, in which mosquitoes breed. What is the prognosis for these infections? Outcome generally depends on the particular infectious agent involved, the severity of the illness, and how quickly treatment is given.
In most cases, people with very mild encephalitis or meningitis can make a full recovery, although the process may be slow. Individuals who experience only headache, fever, and stiff neck may recover in 2-4 weeks. Individuals with bacterial meningitis typically show some relief 48-72 hours following initial treatment but are more likely to experience complications caused by the disease. In more serious cases, these diseases can cause hearing and/or speech loss, blindness, permanent brain and nerve damage, behavioral changes, cognitive disabilities, lack of muscle control, seizures, and memory loss. These individuals may need long-term therapy, medication, and supportive care. The recovery from encephalitis is variable depending on the cause of the disease and extent of brain inflammation.
| How dangerous is meningitis? Pull your answer from the text below only.
Bacterial meningitis is a rare but potentially fatal disease. Several types of bacteria can first cause an upper respiratory tract infection and then travel through the bloodstream to the brain. The disease also can occur when certain bacteria invade the meninges directly. Bacterial meningitis can cause stroke, hearing loss, and permanent brain damage. Pneumococcal meningitis is the most common form of meningitis and is the most serious form of bacterial meningitis. Some 6,000 cases of pneumococcal meningitis are reported in the United States each year. The disease is caused by the bacterium Streptococcus pneumoniae, which also causes pneumonia, blood poisoning (septicemia), and ear and sinus infections. At particular risk are children under age 12 and adults with a weakened immune system. People who have had pneumococcal meningitis often suffer neurological damage ranging from deafness to severe brain damage. Meningococcal meningitis is caused by the bacterium Neisseria meningitudes. Each year in the United States about 2,600 people get this highly contagious disease. High-risk groups include infants under the age of 1 year, people with suppressed immune systems, travelers to foreign countries where the disease is endemic, and military recruits and others who reside in dormitories. Between 10-15 percent of cases are fatal, with another 10-15 percent causing brain damage and other serious side effects. 5 Haemophilus influenzae meningitis was at one time the most common form of bacterial meningitis. Fortunately, the Haemophilus influenzae b vaccine has greatly reduced the number of cases in the United States (see Treatment section). Those most at risk of getting this disease are children in child-care settings and children who do not have access to this vaccine. Other forms of bacterial meningitis include Listeria monocytogenes meningitis (in which certain foods such as unpasteurized dairy or deli meats are sometimes implicated); Escherichia coli meningitis, which is most common in elderly adults and newborns and may be transmitted to a baby through the birth canal; and Mycobacterium tuberculosis meningitis, a rare disease that occurs when the bacterium that causes tuberculosis attacks the meninges. Viral, or aseptic, meningitis is usually caused by enteroviruses—common viruses that enter the body through the mouth and travel to the brain and surrounding tissues where they multiply. Enteroviruses are present in mucus, saliva, and feces, and can be transmitted through direct contact with an infected person or an infected object or surface. Other viruses that cause meningitis include Varicella zoster (the virus that causes chicken pox and can appear decades later as shingles), influenza, mumps, HIV, and Herpes simplex type 2 (genital herpes). Fungal infections can affect the brain. The most common form of fungal meningitis is caused by the fungus Cryptococcus neoformans (found mainly in dirt and bird droppings). 6 It can be slow to develop and smolder for weeks. Although treatable, fungal meningitis often recurs in nearly half of affected persons. Parasitic causes include cysticercosis (a tapeworm infection in the brain), which is common in other parts of the world, as well as cerebral malaria. There are rare cases of amoebic meningitis, sometimes related to fresh water swimming, which can be rapidly fatal.
People who are suspected of having meningitis or encephalitis should receive immediate, aggressive medical treatment. Both diseases can progress quickly and have the potential to cause severe, irreversible neurological damage. Effective vaccines are available to prevent Haemophilus influenzae, pneumococcal and meningococcal meningitis. Two types of vaccines are available in the United States to help prevent meningococcal meningitis. The Centers for Disease Control and Prevention recommends vaccination with a meningococcal conjugate vaccine for all preteens and teens ages 11 to 12 years, with a booster dose at 16 years old. Teens and young adults (ages 16 through 23) also may be vaccinated with a serogroup B meningococcal vaccine. If meningococcal meningitis is diagnosed, people in close contact with an infected individual should be given preventative antibiotics. Early treatment of bacterial meningitis involves antibiotics that can cross the blood-brain barrier (a lining of cells that keeps harmful micro-organisms and chemicals from entering the brain). Appropriate antibiotic treatment for most types of meningitis can greatly reduce the risk of dying from the disease. Anticonvulsants to prevent seizures and corticosteroids to reduce brain inflammation may be prescribed. Infected sinuses may need to be drained. Corticosteroids such as prednisone may be ordered to relieve brain pressure and swelling and to prevent hearing loss that is common in some forms of meningitis. Lyme disease, a bacterial infection, is treated with antibiotics. Viral meningitis is rarely life threatening and no specific treatment is needed. Fungal meningitis is treated with intravenous antifungal medications. Antiviral drugs used to treat viral encephalitis include acyclovir and ganciclovir. For most encephalitis-causing viruses, no specific treatment is available. Autoimmune causes of encephalitis are treated with additional immunosuppressant drugs and screening for underlying tumors when appropriate. Acute disseminated encephalomyelitis, a non-infectious inflammatory brain disease mostly seen in children, is treated with steroids. Anticonvulsants may be prescribed to stop or prevent seizures. Corticosteroids can reduce brain swelling. Affected individuals with breathing difficulties may require artificial respiration. Once the acute illness is under control, comprehensive rehabilitation should include cognitive rehabilitation and physical, speech, and occupational therapy. Can meningitis and encephalitis be prevented? Effective vaccines are available to prevent some forms of meningitis. People should avoid sharing food, utensils, glasses, and other objects with someone who may be exposed to or have the infection. People should wash their hands often with soap and rinse under running water. People who live, work, or go to school with someone who has been diagnosed with bacterial meningitis may be asked to take antibiotics for a few days as a preventive measure. Also, people should limit outdoor activities at night, wear long-sleeved clothing when outdoors, use insect repellents that are most effective for that particular region of the country, and rid lawn and outdoor areas of free-standing pools of water, in which mosquitoes breed. What is the prognosis for these infections? Outcome generally depends on the particular infectious agent involved, the severity of the illness, and how quickly treatment is given.
In most cases, people with very mild encephalitis or meningitis can make a full recovery, although the process may be slow. Individuals who experience only headache, fever, and stiff neck may recover in 2-4 weeks. Individuals with bacterial meningitis typically show some relief 48-72 hours following initial treatment but are more likely to experience complications caused by the disease. In more serious cases, these diseases can cause hearing and/or speech loss, blindness, permanent brain and nerve damage, behavioral changes, cognitive disabilities, lack of muscle control, seizures, and memory loss. These individuals may need long-term therapy, medication, and supportive care. The recovery from encephalitis is variable depending on the cause of the disease and extent of brain inflammation.
|
The response should only contain info from this text. | what are the cases where it comes out the mouth? | WHILE the occipital and sincipital cerebral hernias form external
visible tumors in the occipital and naso-frontal regions respectively, we
find no external visible tumors in the basal hernias. As the sincipital
hernias, however, leave the cranium in close proximity to the place of
exit of the basal hernias, let us first review briefly the various forms of
sincipital hernias:
1. The naso-frontal hernias leave the cranium between the frontal and
nasal bones and form a tumor in the median line in the region of the
glabella.
2. The naso-ethmoidal hernias leave the cranium between the frontal
and nasal bones on the one side and the lateral mass or labyrinth on the
other, which is forced or displaced downward toward the nasal cavity.
The tumor appears externally in the region of the border between the
osseous and cartilaginous portions of the nose, hanging down toward the
tip or the wing of the nose.
3. The naso-orbital hernias leave the cranium between the frontal,
ethmoid and lachrymal bones. In the region of the latter they enter
the orbit and present at or near the inner canthus of the eye.
All the above-named varieties present external visible tumors. The
naso-ethmoidal and naso-orbital varieties are probably not distinguish-
able from each other, as they leave the cranium at the same place;
namely, the nasal notch of the frontal and the cribriform plate of the
ethmoid bone. Furthermore, the same hernia may divide into two
branches, of which the anterior passes downward and forward behind
the nasal bone, to protrude in the face at the border of the osseous and
cartilaginous part of the nose, and the posterior branch descends into
the anterior and medial portion of the orbit between the frontal, eth-
moid and lachrymal bones. There is always some defect of the bones
in question at the point where the encephalocele leaves the cranium.
4. Basal hernias are, as already stated, distinguished from the other
sincipital hernias by not causing a protruding tumor in the face.
Heinecke distinguishes between three forms of these hernias:
I. Cephalocele spheno-pharyngea is the most common variety, and
leaves the cranium through an opening between the body of the sphenoid
bone and the ethmoid bone, or through one of these bones, to come down
in the nasal or naso-pharyngeal cavity. Extending from this point they
may present in one of the nostrils, as in Czerny’s case; in the naso-
pharyngeal cavity as in the cases of Giraldés, Otto, and Klimentowsky,
cited from Larger, and in my case, or come down into the mouth
through a cleft palate, as in the cases reported by Virchow, Lichten-
berg, Klintosch and Serres, also cited from Larger.
II. Cephalocele spheno-orbitalis, which leaves the cranium through
the superior orbital fissure to enter the orbit behind the globe of the eye.
III. Cephalocele spheno-maxillaris, which, like the second form, leaves
the cranium through the superior orbital fissure, but instead of remain-
ing in the posterior part of the orbit, descends through the inferior
orbital fissure into the spheno-maxillary fossa. The tumor presents, and
can be felt in the mouth on the medial side of the ascending ramus
of the inferior maxilla, and is visible on the outside of the face, on the
cheek below the zygoma, in the same place where the retro-maxillary
branches of retro-nasal fibroids present.
The two last-named hernias are exceedingly rare, and I have been
unable to find all the varieties to which Heinecke’s classification refers.
Larger mentions three instances of retro-orbital encephalocele referred
to by Spring. In the case published by Walther, the tumor descended
through the superior orbital fissure, and caused exophthalmos and
destruction of the eye. Spring had seen two similar specimens in the
museum at Bonn.
The first variety, the spheno-pharyngeal, is less uncommon. I shall
mention the more accurately described instances of this variety, as they
present more of surgical interest than Heinecke ascribed to them when
he said: “Cephalocele basalis is of no surgical importance, as it has been
found only in non-viable monsters (nicht lebensfähigen Missbildungen).”
Attempts at the removal of encephalocele by operation have been
made by Lichtenberg, Czerny and myself. Lichtenberg’s patient died
from the operation; Czerny’s patient survived the operation, but died
later from apparently independent causes; my patient made a definite
recovery.
Lichtenberg reports the case of a newborn girl in whom a large
reddish tumor, the size of a small fist, hung out of the mouth, covering
the chin, with its base resting on the sternum. On more minute examin-
ation it was seen that the patient had a hare-lip situated nearly in the
median line of the lip, and complicated with cleft palate. The tumor
was divided into two portions by a slight constriction in the middle, was
elastic to the touch, and was attached by a pedicle which could be
followed up to the right wall of the nasal cavity by opening the mouth,
where it was continuous with the nasal mucosa. The patient died from
the operation, and the autopsy demonstrated that the tumor was a cere-
bral hernia.
Klintosch gives a vague description of an infant in whom a
tumor protruded in the mouth. The patient had a hare-lip and cleft
palate, some bones of the face were wanting, and the eyes were atrophied.
In the sella Turcica was an opening the size of a goose-quill through
which the neck of the hernia came down into the mouth, there to form
a tumor the size of a hazelnut. This contained the hypophysis, which
was hollow and communicated directly with the ventricle.
Serres describes an infant in whom some portions of the brain, with
their envelopes, protruded from the cranium in the median line between
the sphenoid and ethmoid bones. The tumor descended into the nasal
fossa, almost into the pharynx.
Giraldés, according to Dupuytren, observed an encephalocele which
descended into the interior of the nose.
Otto, cited by Spring, states that he has seen in the museum at
Vienna a cerebral tumor which had penetrated into the nasal cavity
through the cribriform plate of the ethmoid.
Kelsch, according to Otto, has seen a case in which the hypophysis
was situated in the sphenoidal sinus.
Klimentowsky describes an encephalocele in a newborn child, in
which the anterior portion of the two frontal lobes descended into the
right side of the nasal cavity, as was verified by the autopsy.
Rippmann, cited by Meyer, found in a foetus of twenty-three weeks,
the head of which was double the normal size, and consequently hydro-
cephalic, a lobulated tumor having a pedicle three or four lines in thick-
ness, which descended through a canal in the body of the sphenoid bone.
Virchow describes a specimen in the Berlin museum, of hydrencepha-
locele palatina in a newborn child. (See Fig. 1.) From the open mouth
protruded an irregular nodulated tumor the size of a small apple. It
was apparently adherent to the hard palate, but upon section it was seen
that it had pushed both the vomer and the hard palate forward and up-
ward, and that it emerged from the cranial cavity through a broad open-
ing immediately anterior to the sphenoid bone, and behind the still carti-
laginous ethmoid. The anterior portion of the sphenoid was forced
downward and backward, and the connection between it and the vomer
interrupted by the tumor, so that the vomer was connected only with
the ethmoid. The anterior portion of the sac contained a cavity lined
with smooth dura mater, below and behind which were several irregular
smaller cavities. In the upper portion of the tumor was brain substance
which extended from this point up into the cerebral portion of the cranial
cavity. The brain was pushed downward toward the base of the cranial
cavity, and above it was a large cavity filled with fluid, and surrounded
by a thick membrane.
In addition to this more or less cursory discussion of cases from the
older literature, there has now appeared an accurate and excellent report
of a case by Meyer, from Czerny’s clinic. The case was one of con-
genital nasal polypus, and was brought to the Heidelberg clinic for
operation. The child died six weeks later, and the diagnosis was made
after post-mortem microscopical examination.
The patient was a child three days old, well developed, weighing five
or six pounds. The left ala nasi was broadened and pushed upward by
a soft, elastic, compressible, pedunculated, transparent tumor the size of
a hazelnut, half of which protruded through the opening of the nose,
and was clad with smooth, yellowish-red mucous membrane, and covered
with dried crusts of serous exudate. The tumor did not increase in size
when the child cried; it was attached 14 cm. behind the free border of
the septum. Upon incision of the tumor bloody serum escaped, and
upon pressure puriform mucus was forced out.
|
The response should only contain info from this text.
what are the cases where it comes out the mouth? |
You can only use the provided text for information in your response. Answer in under 150 words. | How can I charge the patient? | Electronic Claim Submission via Clearinghouse
DentaQuest works directly with Emdeon (1-888-255-7293), Tesia 1-800-724-7240, EDI Health Group 1-800-576-6412, Secure EDI 1-877-466-9656 and Mercury Data Exchange 1-866-633-1090, for claim submissions to DentaQuest.
You can contact your software vendor and make certain that they have DentaQuest listed as the payer and claim mailing address on your electronic claim. Your software vendor will be able to provide you with any information you may need to ensure that submitted claims are forwarded to DentaQuest. DentaQuest’s Payor ID is CX014.
27.5 HIPAA Compliant 837D File
For Providers who are unable to submit electronically via the Internet or a clearinghouse, DentaQuest will work directly with the Provider to receive their claims electronically via a HIPAA compliant 837D or 837P file from the Provider’s practice management system.
27.6 NPI Requirements for Submission of Electronic Claims
In accordance with the HIPAA guidelines, DentaQuest has adopted the following NPI standards in order to simplify the submission of claims from all of our providers, conform to industry required standards and increase the accuracy and efficiency of claims administered by DentaQuest.
• Providers must register for the appropriate NPI classification at the following website https://nppes.cms.hhs.gov/NPPES/Welcome.do and provide this information to DentaQuest in its entirety.
• All providers must register for an Individual NPI. You may also be required to register for a group NPI (or as part of a group) dependent upon your designation.
• When submitting claims to DentaQuest you must submit all forms of NPI properly and in their entirety for claims to be accepted and processed accurately. If you registered as part of a group, your claims must be submitted with both the Group and Individual NPI’s. These numbers are not interchangeable and could cause your claims to be returned to you as non-compliant.
• If you are presently submitting claims to DentaQuest through a clearinghouse or through a direct integration you need to review your integration to assure that it is in compliance with the revised HIPAA compliant 837D format. This information can be found on the 837D Companion Guide located on the Provider Web Portal.
27.7 Paper Claim Submission
• Claims must be submitted on 2018, 2019, or later ADA approved claim forms.
• Member name, identification number, and date of birth must be listed on all claims submitted. If the Member identification number is missing or miscoded on the claim form, the patient cannot be identified. This could result in the claim being returned to the submitting Provider office, causing a delay in payment.
• The paper claim must contain an acceptable provider signature.
• The Provider and office location information must be clearly identified on the claim. Frequently, if only the dentist signature is used for identification, the dentist’s name cannot be clearly identified. Please include either a typed dentist (practice) name or the DentaQuest Provider identification number.
• The paper claim form must contain a valid provider NPI (National Provider Identification) number. If the claim form does not include this box, the NPI must still be included on the form. The ADA claim form only supplies 2 fields to enter NPI. On paper claims, the Type 2 NPI identifies the payee, and may be submitted in conjunction with a Type 1 NPI to identify the dentist who provided the treatment. For example, on a standard ADA Dental Claim Form, the treating dentist’s NPI is entered in field 54 and the billing entity’s NPI is entered in field 49 (see the illustrative field check after this list).
• The date of service must be provided on the claim form for each service line submitted.
• Approved ADA dental codes as published in the current CDT book or as defined in this manual must be used to define all services.
• List all quadrants, tooth numbers and surfaces for dental codes that necessitate identification (extractions, root canals, amalgams and resin fillings). Missing tooth and surface identification codes can result in the delay or denial of claim payment.
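As a rough, non-authoritative illustration, the sketch below collects the field requirements above into a single pre-submission check. The field names (member_id, billing_npi, and so on) and the CDT code prefixes are hypothetical placeholders, not DentaQuest's actual claim schema.

```python
# Illustrative pre-submission check for one paper-claim service line.
# Field names are hypothetical, not DentaQuest's actual schema.
REQUIRED_FIELDS = [
    "member_name", "member_id", "member_dob",   # identify the Member
    "provider_name", "provider_signature",      # identify the office
    "billing_npi", "treating_npi",              # group and individual NPI
    "date_of_service", "cdt_code",              # one per service line
]

# Codes that necessitate tooth/surface identification per the bullets above
# (extractions, root canals, amalgams, resin fillings) -- prefixes are
# illustrative only.
NEEDS_TOOTH_ID_PREFIXES = ("D7", "D33", "D21", "D23")

def missing_claim_fields(line_item: dict) -> list:
    """Return the names of required fields that are empty or absent."""
    missing = [f for f in REQUIRED_FIELDS if not line_item.get(f)]
    code = str(line_item.get("cdt_code", ""))
    if code.startswith(NEEDS_TOOTH_ID_PREFIXES) and not line_item.get("tooth_number"):
        missing.append("tooth_number")
    return missing
```

A claim line that returns a non-empty list here would be the kind of submission the manual warns could be returned or delayed.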
Affix the proper postage when mailing bulk documentation. DentaQuest does not accept postage due mail. This mail will be returned to the sender and will result in delay of payment.
Claims should be mailed to the following address:
DentaQuest- Claims PO Box 2906
Milwaukee, WI 53201-2906
For questions, providers may contact DentaQuest Provider Services at 844.776.8740.
27.8 Coordination of Benefits (COB)
Medicaid is the payer of last resort. Providers should ask Members if they have other dental insurance coverage at the time of their appointment. When Medicaid is the secondary insurance carrier, a copy of the primary carrier's Explanation of Benefits (EOB) must be submitted with the claim. For electronic claim submissions, the payment made by the primary carrier must be indicated in the appropriate COB field. When a primary carrier's payment meets or exceeds the Medicaid fee schedule, DentaQuest will consider the claim paid in full and no further payment will be made on the claim.
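A minimal sketch of the payer-of-last-resort arithmetic described above. The manual only states the paid-in-full case (primary payment at or above the Medicaid fee schedule); the assumption that the plan otherwise pays the difference up to the fee schedule is illustrative, not a statement of DentaQuest policy.

```python
def secondary_payment(medicaid_fee: float, primary_paid: float) -> float:
    """Medicaid as payer of last resort (illustrative sketch).

    If the primary carrier's payment meets or exceeds the Medicaid fee
    schedule amount, the claim is considered paid in full (returns 0).
    Otherwise this sketch assumes the difference up to the fee schedule.
    """
    return max(0.0, medicaid_fee - primary_paid)

# Example: fee schedule $80, primary EOB shows $95 paid -> $0 owed by Medicaid.
assert secondary_payment(80.0, 95.0) == 0.0
```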
27.9 Member Billing Restrictions
Providers may not bill Members directly for Covered Services. DentaQuest reimburses only those services that are medically necessary and a Covered benefit in the respective program the Member is enrolled in. Medicaid Members do not have co-payments.
Member Acknowledgement Statement
A Provider may bill a Member for a claim denied as not being medically necessary or not a part of a Covered service if both of the following conditions are met:
• A specific service or item is provided at the request of the client
• If the Provider obtains a written waiver from the Member prior to rendering such service.
The Member Acknowledgment Statement reads as follows:
“I understand that, in the opinion of (Provider’s name), the services or items that I have requested to be provided to me on (dates of service) may not be covered under the Texas Medicaid Assistance Program as being reasonable and medically necessary for my care. I understand that DentaQuest through its contract with Superior and HHSC determines the medical necessity of the services or items that I request and receive. I also understand that I am responsible for payment of the services or items I request and receive if these services or items are determined not to be reasonable and medically necessary for my care.”
27.10 Private Pay Form (Non-Covered Services Disclosure Form)
There are instances when the dentist may bill the Member. For example, if the Provider accepts the Member as a private pay patient and informs the Member at the time of service that the Member will be responsible for payment for all services. In this situation, it is recommended that the Provider use a Private Pay Form. It is suggested that the Provider use the Member Acknowledgement Statement listed above as the Private Pay Form, or use the DentaQuest Non-Covered Services Disclosure Form. Without written, signed documentation that the Member has been properly notified of their private pay status, the Provider could not ask for payment from a Member. |
How can I charge the patient? You can only use the provided text for information in your response. Answer in under 150 words. |
Draw your answer from the above text. | According to this document, what is an executer responsible for? | **Texas last will and testament requirements**
Here are the requirements for a valid will in Texas:
Your will must be “in writing,” meaning it exists in a physical form. For example, a will “in writing” can be one you’ve written by hand, or one you’ve typed on a computer and printed. A digital copy, like a PDF of your will saved on your computer, isn’t considered valid.
You must be at least 18 years old. This rule doesn’t apply if you’re married or serve in the military.
You must be of sound mind and memory. This means that you:
Understand what it means to make a will
Understand the nature and extent of your property and relationships
Are capable of making reasonable judgments about the matters your will controls (for example, naming a guardian for your minor children)
You must make your will freely and voluntarily. This means you shouldn’t be under improper pressure to write your will by someone who has power over you, like a caretaker or family member. This is known as “undue influence.”
You must sign your will in the presence of at least two credible witnesses, who also sign. According to the Texas Estates Code, your witnesses must be at least 14 years old. A witness is “credible” when they don’t receive any financial benefit under your will. In other words, your witnesses should be people who aren’t receiving anything from your will.
Do you need to notarize your will in Texas?
No — in Texas, you don’t need to notarize your will to make it valid. However, a notary is required if you want to make your will self-proving. When a will is self-proving, the court can accept your will without needing to contact your witnesses to prove its validity. This can speed up the probate process.
To make your will self-proving, you must include a self-proving affidavit. In it, you and your witnesses state that your will was signed by you in the witnesses’ presence, and that you’ve declared it to be your will. Your self-proving affidavit must be signed (or acknowledged) by both you and your witnesses in front of a notary, who will then notarize the affidavit.
Are holographic wills legal in Texas?
Holographic wills, also called handwritten wills, are accepted in Texas. To be valid, a holographic will must be written entirely in your handwriting and signed by you. As long as you follow these two requirements, you don’t need witnesses to make your holographic will valid. However, if you think someone could challenge the validity of your will, it’s a good idea to have them anyway.
Estate attorneys generally don’t recommend making a holographic will. They can be difficult to prove legally valid in court, and they may contain errors or unclear wishes. Learn more about the pitfalls of holographic wills, and alternative options you can use instead.
Texas will executor requirements
Your executor is the person responsible for managing your probate estate and carrying out the wishes described in your will. They will work with the probate court to pay your debts and distribute your assets to the beneficiaries of your will.
You can use your will to name the person (or people) you’d like to be your executor, but not everyone is qualified to serve.
For a person to be accepted by the Texas court as your executor, they must:
Be at least 18 years old
Be capable of performing their duties as executor
Have never been convicted of a felony
Be deemed “suitable” by the court
It’s often more practical to choose an executor that lives in Texas, and close to you. If you decide to nominate someone who lives out of state, they can only serve as your executor if they appoint a resident agent and notify the court. A resident agent is someone who lives in the state of Texas and accepts legal documentation on behalf of your executor and your estate.
Revoking or changing your will in Texas
Revoking your will
You can generally revoke, or nullify, your will in Texas at any time before your death, unless you’ve committed to an agreement stating you wouldn’t (for example, a joint will). There are a few ways you can nullify your will:
Intentionally destroy it. You can burn it, tear it, shred it, or throw it away.
Ask someone to destroy it for you in your presence.
Create a new one. Generally, a more recent will overrides any previous wills you’ve written. Be sure to include language stating explicitly that your new will is intended to revoke your prior will, and destroy all previous wills and codicils to avoid confusion.
Change your will with a codicil
If you’d like to make a few changes to your will, rather than revoking it altogether, you may consider writing a codicil. A codicil is a legal document that revises your existing will. To be legally effective, codicils must be executed and witnessed just like a will. In Texas, this means you must be of sound mind to make a codicil, and it must be signed by you and two witnesses.
Estate attorneys generally don’t recommend creating a codicil. It can be difficult to keep track of multiple documents, and codicils could make it more difficult to determine the will-maker’s wishes. In most cases, it may be safer to simply create a new will.
Probate in Texas
Probate is the legal process of gathering the assets of a deceased person and distributing them to that person’s beneficiaries. During probate, your executor will be responsible for preparing an inventory of your estate’s assets and managing those assets until they can be distributed. A court typically oversees the process to resolve any questions and disputes that might arise, make sure your remaining debts are paid, and ensure that your property is passed on to the right people or organizations.
Here’s a high-level overview of what happens during the probate process:
Someone, usually your executor or a family member, files your will (if you had one). In Texas, they have four years from the date of death to file your will.
The court validates your will.
The court appoints a representative, or executor, to oversee your estate.
Your executor identifies your assets and debts, and contacts your beneficiaries and creditors to notify them of your passing.
Your executor pays any of your debts, usually with money from your estate.
Your executor distributes assets to your beneficiaries, according to the wishes outlined in your will. If you didn’t have a will, your assets are distributed based on Texas’s intestate laws.
Independent administration vs. court-supervised administration
Texas’s probate process is known for being quick and simple due to a process called “independent administration.” Independent administration allows executors to take steps to settle the estate — like paying debts, selling property, and distributing assets — with minimal court supervision.
Court-supervised administration is also an option, although less common. The probate court is more involved during a court-supervised administration. Its approval is required for more of the proposed actions the executor may wish to take. Although this process takes longer and can be more expensive than an independent administration, it may be useful if the estate is particularly complicated, or if the estate’s beneficiaries don’t get along. In your will, you can indicate which type of administration your estate will receive.
Disinheriting an heir
In Texas, you can use your will to disinherit an heir, like an adult child or grandchild. This means you can prevent them from having the legal right to your property after you die. However, this doesn’t apply to your spouse. In Texas (and many states), there are laws in place that protect spouses from being disinherited without their consent. You can read more about these laws in the community property section below.
Is Texas a community property state?
Yes, Texas is a community property state. Community property states consider almost all assets acquired by either spouse during their marriage to belong to both spouses equally. In community property states like Texas, the surviving spouse is entitled to at least half of any community property, even if the deceased spouse wrote something different in their will.
To better understand Texas community property laws, it helps to understand the difference between personal and community property.
Personal property
Personal property is property that belongs to only one spouse. This can include:
Any assets or debts you acquire before your marriage
Any inheritance you receive during your marriage
Any assets specified in a prenuptial or postnuptial agreement
Personal property isn’t considered community property. This means you can use your will to leave it to anyone you want.
Community property
With few exceptions, any assets and debts that either you or your spouse acquire during your marriage are community property under Texas law. For example, this could be a vehicle your spouse purchased that has their name on the title, or the money you earned in your career during the years you were married. Each of you will have a one-half interest in each item of community property, and you will generally only be able to use your will to control who receives your one-half interest in that property — the other one-half interest remains the property of your spouse.
Many people choose to leave the majority of their estate to their spouse, regardless of whether they live in a community property state. If you want to leave a significant portion of your estate to someone other than your spouse for any reason, you should consider working with an estate attorney to discuss your situation and create an estate plan to meet your needs.
| {CONTEXT}
=======
{QUESTION}
=======
According to this document, what is an executor responsible for?
{INSTRUCTION}
=======
Draw your answer from the above text. |
{instruction}
==========
In your answer, refer only to the context document. Do not employ any outside knowledge
{question}
==========
[user request]
{passage 0}
==========
[context document] | Given that churn exists in the subscription application game, why is it beneficial at times to promote and create this churn. Explain this in 300 words. | Consumer subscription apps are relatively easy to launch, which is why there are hundreds of thousands of them in the app stores. Compared with more complex models like B2B SaaS or marketplaces, these businesses can launch faster with less capital for many reasons: no sales teams, rapid purchasing cycles, high gross margins with low marginal costs to serving additional subscribers, and turnkey global distribution, payments, and support tools through the app stores.
But these apps face several fundamental challenges that make them very hard to scale:
Lack of control over distribution: The Apple and Google app stores exert significant control over product placements, promotions, TOS and usage guidelines, payments, and cancellation terms. This locks consumer subscription apps into paying expensive app store fees and restricts their ability to distribute and monetize their products.
Overdependence on paid acquisition: Since consumer subscription apps don’t have sales teams and often can’t rely on virality as much as social networks or marketplaces, many turn to paid acquisition as their primary growth lever. This strategy always had flaws, and Apple’s App Tracking Transparency (ATT) restrictions have only made it harder.
High subscriber churn rates: Churn is generally higher for consumer subscription apps vs. B2B SaaS businesses, and most don’t benefit from strong network effects like social networks or marketplaces. This makes products less sticky and retention more difficult. Many also have products and growth strategies that are fairly easy to replicate and thus prone to copycats, weakening their defensibility and further exacerbating churn rates.
ARPU is often low and hard to grow: Consumer subscription apps generally have much lower Average Revenue per User (ARPU) vs. B2B SaaS, and they have a harder time expanding ARPU than many other business models. B2B SaaS businesses increase Net Revenue Retention (NRR) by growing the value of retained accounts to offset churn. Marketplaces can boost ARPU by increasing transactions as they gain liquidity. Social networks grow ad revenue by increasing user engagement. But most consumer subscription apps offer only one subscription, which means they have a hard time extracting additional value from users.
In fact, it is because consumer subscription apps are relatively easy to launch that so many exist, leading to fierce competition, channel saturation, and subscription fatigue. Meanwhile, Apple’s recent ATT restrictions have rendered paid acquisition less efficient, making it even more difficult for these companies to maintain healthy unit economics. This explains why out of all the consumer subscription apps out there, fewer than 50 have ever reached $1B+ valuations, and fewer than 10 are publicly traded companies with $10B+ market caps.
Public company market caps are from 8/30/24. Private company valuations are based on the most recent publicly available data, which may be out of date relative to internal valuations. Public companies like Bumble, Stitch Fix, and Chegg that once had market caps over $1B are included even though their current market caps are below $1B. Companies like Canva, Grammarly, Figma, Notion, and Dropbox are excluded because they are considered B2B SaaS businesses since they have sales teams and sell to both prosumers and enterprises. Products like ChatGPT, Hulu, ESPN+, Disney+, and Pandora that are subsidiaries of larger companies are excluded.
These challenges are supported by RevenueCat’s proprietary data from the past year, which has been aggregated from over 30,000 subscription apps accounting for over 290M subscribers:
Even top-quartile consumer subscription apps only convert roughly 1 in 20 installs into a paid subscription. They also lose more than half of their annual subscribers after the first year, and more than half of their monthly subscribers after just three months. This makes it hard to build a sustainable business, but not impossible. 95th-percentile apps, like those in the figure above, outperform the rest by a wide margin, with metrics that provide a strong foundation for growth.
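To make those benchmark figures concrete, here is a minimal sketch that runs the arithmetic for a hypothetical cohort of 10,000 installs. The conversion and retention rates are placeholder values chosen to echo the rough figures quoted above; they are not RevenueCat data.

```python
# Illustrative cohort arithmetic for a hypothetical top-quartile subscription app.
# All numbers are assumptions, loosely echoing the benchmarks quoted above.

installs = 10_000
install_to_paid = 1 / 20       # roughly 1 in 20 installs becomes a paid subscription
year_one_retention = 0.45      # "more than half" of annual subscribers lost after year one

paid_subscribers = installs * install_to_paid
retained_after_year_one = paid_subscribers * year_one_retention

print(f"Paid subscribers from {installs:,} installs: {paid_subscribers:.0f}")
print(f"Still subscribed after the first year: {retained_after_year_one:.0f}")
```

Even with top-quartile conversion, a cohort of ten thousand installs ends its first year with only a few hundred paying subscribers, which is why strengthening every step of the loop matters so much.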
So what makes these top apps different?
Using the Subscription Value Loop to grow your consumer subscription business
The best consumer subscription apps overcome these challenges by doing two things:
1. Building their businesses on a core value promise that provides enduring value. This core value promise is what attracts users to the app and keeps them coming back over time. The stronger and more differentiated the value promise, the more subscribers will pay and the longer they’ll keep paying. The value promises of category-leading consumer subscription apps are clear and compelling, to the point where I can describe these promises in a single sentence without naming the associated companies and you can probably guess the ones I am talking about:
Listen to music you love, build playlists, and find new artists who match your taste.
Enjoy gamified study experiences that make language learning fun.
Record and share your workouts with a supportive community of athletes.
Find, match, and connect with attractive single people in your area with one swipe.
2. Harnessing their value promise to drive a compounding Subscription Value Loop that increases LTV/CAC and accelerates Payback Period:
Step 1: Value Creation: Quickly connecting new users to the app’s core value promise and offering enduring value that keeps them coming back
Step 2: Value Delivery: Cost-efficiently distributing the app to users organically through word of mouth and SEO, as well as through sustainable paid acquisition
Step 3: Value Capture: Converting free users into subscribers, which generates revenue that can be reinvested into the business to strengthen the rest of the loop
As a company bolsters its Subscription Value Loop, LTV/CAC goes up and Payback Period comes down, driving faster and more efficient growth. What makes consumer subscription apps unusual is that most don’t have sales teams, which means the product must be able to sell itself organically and through paid advertisements. Core product, growth product, and marketing teams must work together to build an integrated system that converts their app’s core value promise into subscription revenue by maximizing each step in the Subscription Value Loop.
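As a rough illustration of how these quantities relate, the sketch below computes LTV, LTV/CAC, and payback period from a few assumed inputs. The formulas are generic, simplified subscription heuristics, not the calculator referenced below, and every input value is a made-up example.

```python
# Simplified subscription unit-economics sketch; all inputs are illustrative assumptions.

monthly_arpu = 8.00      # average revenue per subscriber per month (assumed)
gross_margin = 0.70      # margin after app store fees and cost to serve (assumed)
monthly_churn = 0.12     # share of subscribers lost each month (assumed)
cac = 25.00              # blended customer acquisition cost per subscriber (assumed)

lifetime_months = 1 / monthly_churn                  # expected lifetime under constant churn
ltv = monthly_arpu * gross_margin * lifetime_months  # lifetime gross profit per subscriber

ltv_to_cac = ltv / cac
payback_months = cac / (monthly_arpu * gross_margin)  # months of margin needed to recover CAC

print(f"LTV: ${ltv:.2f}, LTV/CAC: {ltv_to_cac:.1f}x, payback: {payback_months:.1f} months")
```

Improvements to any step of the loop show up directly in these numbers: stronger Value Capture lifts ARPU and conversion, stronger Value Creation lowers churn, and stronger Value Delivery lowers CAC.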
I built a Subscription Value Loop Calculator with RevenueCat where you can plug in your numbers, measure the performance of your loop, and identify growth opportunities.
There are five steps to using this tool:
Identify metrics that drive Value Creation, Value Delivery, and Value Capture for your app.
Calculate your company’s recent performance against each metric.
Compare your performance on each metric vs. category-specific benchmarks.
Discover opportunities based on metrics where you underperform vs. these benchmarks.
Prioritize initiatives to improve the metrics with the greatest upside potential.
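As a small illustration of steps 3 and 4 in the list above, the sketch below compares a handful of made-up app metrics against equally made-up category benchmarks and flags where the app underperforms. The metric names and values are placeholders, not outputs of the calculator described above.

```python
# Flag metrics that fall below their category benchmark (illustrative values only).

app_metrics = {
    "install_to_paid_conversion": 0.035,
    "month_3_retention": 0.48,
    "organic_install_share": 0.55,
}

category_benchmarks = {
    "install_to_paid_conversion": 0.05,
    "month_3_retention": 0.50,
    "organic_install_share": 0.60,
}

for name, benchmark in category_benchmarks.items():
    value = app_metrics[name]
    gap = value - benchmark
    status = "meets benchmark" if gap >= 0 else "below benchmark"
    print(f"{name}: {value:.2f} vs {benchmark:.2f} -> {status} (gap {gap:+.2f})")
```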
For a detailed step-by-step guide on how to use this tool | {instruction}
https://www.lennysnewsletter.com/p/the-subscription-value-loop-a-framework |
You can only respond using information in the context block. List 5 bullet points. | What are the main points of this passage? | States and local governments traditionally lead U.S. economic development efforts, with the federal
government selectively intervening to address significant need. However, the 2019 Coronavirus Disease
(COVID-19) pandemic has caused pervasive social and economic dislocation and extreme subnational
fiscal stress, straining existing federal economic development structures. This Insight examines current
federal economic development policy and outlines various options for addressing a potentially lengthy
pandemic recovery, or future such long-term challenges.
Federal Economic Development and COVID-19
The nationwide scope and protracted time horizon of the COVID-19 pandemic have challenged the
existing economic development infrastructure at all levels of government. This system is not designed or
arguably equipped to address scenarios in which otherwise unusual distress is endemic, and state and
local governments are acutely constrained by both the scale of the crisis as well as fiscal limitations.
The Federal Approach: Distress-Based Interventions
In the United States’ federal system, economic development activities are primarily the responsibility of
state and local governments, which fund various programs that may include business relocation and
retention incentives, workforce development, and other policies that stimulate growth and job creation.
State and local governments are also the primary agents (sometimes with the support of federal funding)
in other economic development-related activities—such as improvements to general infrastructure,
housing, community facilities, land use, education, and public safety. Those unmet needs not fully
addressed at the state and local levels, particularly in economically distressed or disadvantaged
communities, are targeted through federal economic development interventions.
Most funding programs provided by the principal federal economic development agencies—the
Department of Housing and Urban Development (HUD), the Economic Development Administration
(EDA), the Department of Agriculture (USDA), and the federal regional commissions and authorities—
prioritize economic development resources for communities exhibiting acute socioeconomic distress. For
example, HUD’s flagship Community Development Block Grant (CDBG) program is targeted at low- and
moderate-income individuals in predominantly urban places. The EDA utilizes distress criteria, and has
historically focused on rural and other non-urban places alongside USDA’s rural development programs.
The federal regional commissions and authorities employ taxonomies of distress in delineated geographic
service areas to prioritize their economic development activities. In addition, federal tax incentives for
economic development—such as the New Markets Tax Credit and Opportunity Zones—prioritize areas
shown to demonstrate high levels of economic distress.
Economic Development in a Time of COVID
The efficacy of the federal distress-based approach to economic development is broadly conditioned on
state and local governments’ ability to conduct more general economic development. In situations of
acute short-term disruption, such as a localized natural disaster or emergency, the federal government can
utilize its economic development and emergency management toolkit to support state and local
governments, organizations and businesses, and individuals with recovery.
However, the pandemic’s scale and longevity have challenged the existing federal economic development
and emergency management apparatus. In response, Congress has provided emergency supplemental
appropriations to increase the capacity of existing federal economic development infrastructure and
support temporary capabilities—such as the Federal Reserve’s direct lending programs, supplemental
unemployment insurance, stimulus cash payments, and the extended deployment of various short-term
emergency management authorities and countermeasures.
Despite congressional action, the pandemic has contributed to surges in poverty, food and housing
insecurity, waves of business closures, and a sharp annual decline in growth, indicating the limits of
federal economic development approaches.
Policy Options for Congress
Congress may consider policy options for adapting federal economic development tools to address high-impact events with extended or indefinite time horizons (e.g., pandemics, climate/weather-related
disasters, or manmade emergencies), such as:
• Increasing funding for HUD’s CDBG program, and providing additional grantee discretion for addressing distress not necessarily captured in CDBG’s current national objectives—such as fiscal and public health;
• Permanently authorizing broad-based relief tools like CDBG authorities for disaster recovery (CDBG-DR), or a CARES Act Coronavirus Relief Fund-type analogue, that could draw from a “no-year” strategic account similar to the Disaster Relief Fund;
• Developing a standing fiscal support function for states as well as localities, potentially based on an expanded Community Disaster Loan-type program;
• Building on the federal regional commissions model, providing a framework for establishing and resourcing intergovernmental federal-state regional commissions throughout the United States as the principal loci of regional economic development, like once provided under Title V of the Public Works and Economic Development Act of 1965 (“Title V” commissions);
• Developing authorities for targeted basic income and “job corps” workforce programs, which could be rapidly activated and expanded during emergencies to provide cash relief to affected individuals and fill urgent labor needs (such as contact tracers and medical auxiliaries during the pandemic); and
• Establishing a permanent interagency infrastructure to plan and coordinate industrial mobilization and support, using the Defense Production Act (DPA) and other emergency authorities, to respond to future social and economic dislocations.
Congress may also consider policies to strengthen and revise the national approach to economic
development generally, including:
• An integrated, intergovernmental economic development framework where federal, state, and local governments coordinate on planning, priorities, and funding;
• A greater emphasis on cultivating business development and job growth regionally (“economic gardening”), and shifting from incentive-driven regional competition to regional clusters of comparative advantage in a global economy; and
• Developing industrial policies that promote the development of strategic industries and supply chains—beyond the defense industrial base—and drive investments in domestic (and certain allied) supply chains anticipating various possible contingency scenarios.
Congress may also take steps to broaden the impacts of these reforms, such as by utilizing reinsurance
markets for a permanent CDBG-DR-type program; authorizing federal regional commissions to issue
bonds for strategic projects; broader adoption of federal loan and loan guarantee mechanisms in lieu of
some grants; and taking equity positions as part of direct investments, including potentially in DPA Title
III projects. | What are the main points of this passage?
You can only respond using information in the passage. List your answers in 5 bullet points with short explanations after. These explanations cannot be longer than 30 words.
[question]
[user request]
=====================
[text]
[context document]
=====================
[instruction]
Answer the question using only the information provided in the context. Do not rely on external knowledge or sources. | What about composites, like what the Titan was made of, make them unsafe in deep water, and why do they design subs with new advanced materials and composites when we already know steel works fine? Also what are some of the actual materials they use? | Several new technologies have been introduced in the submersibles developed during this period compared to the submersibles developed during the first period. The technical characteristics of these submersibles can be summarized as follows: (a) Use of a solid buoyancy material. This change allows for a more compact design and eliminates the need for large gasoline tanks, contributing significantly to the miniaturization of submersibles. (b) Use of ultra-high-strength steel and lightweight metals such as maraging steel, aluminum, and titanium. These materials offer a combination of strength and lightness, enabling the construction of smaller and more maneuverable submersibles and reducing manufacturing and operating costs. As a result of these advancements, submersible technology has been significantly improved in terms of miniaturization, cost reduction, and increased production. This has led to a larger number of submersibles being built in many countries, reflecting the widespread adoption and utilization of this technology.
Pressure hull
The primary design objective of the pressure hull in submersibles is to achieve a balance between reducing the hull weight and increasing the internal volume while ensuring structural strength and stability [19]. This balance is crucial because it directly affects the payload capacity of the submersible. The weight displacement ratio is a key factor influenced by the shape and materials used in constructing the pressure hull. The weight displacement ratio refers to the ratio of the weight of the submersible (including its equipment, payload, and crew) to the volume of water displaced by the submersible hull [8]. To maximize the payload capacity, submersible designers aim to minimize the weight displacement ratio. This can be achieved through careful consideration of the shape and materials used in the construction of the pressure hull. The shape of the hull should be optimized to reduce drag and improve hydrodynamics, while the materials used should be strong and lightweight. By reducing the weight displacement ratio, submersibles can provide a greater payload capacity, allowing for the inclusion of more equipment, sensors, and scientific instruments. This, in turn, enhances the submersible’s capabilities for various applications, such as scientific research, underwater exploration, and deep-sea operations.
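As a minimal illustration of the ratio described above, the sketch below evaluates it for a hypothetical vehicle. It reads the definition as vehicle weight over the weight of the seawater displaced by the hull, which makes the ratio dimensionless; that reading, the seawater density, and all of the figures are assumptions for demonstration and do not describe any submersible named in this article.

```python
# Weight displacement ratio for a hypothetical submersible (illustrative values only).

SEAWATER_DENSITY = 1025.0   # kg per cubic metre, a typical assumed value

vehicle_mass_kg = 20_000.0  # hull, equipment, payload, and crew (assumed)
displaced_volume_m3 = 19.5  # volume of water displaced by the hull (assumed)

displaced_mass_kg = displaced_volume_m3 * SEAWATER_DENSITY
weight_displacement_ratio = vehicle_mass_kg / displaced_mass_kg

print(f"Weight displacement ratio: {weight_displacement_ratio:.3f}")
# A ratio close to 1.0 means the vehicle is near neutral buoyancy; keeping the
# structural weight low pushes the ratio down and frees margin for payload.
```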
Regarding the shape, pressure hulls in submersibles often take on conventional forms such as spherical, cylindrical, or a combination of these shapes. Spherical pressure hulls are commonly used in large depth manned submersibles such as New Alvin, Nautile, MIR I, MIR II, Shinkai 6500, Jiaolong, Shenhaiyongshi, Fendouzhe, and Limiting Factor. The spherical shape provides structural strength and evenly distributes external pressure, making it suitable for withstanding extreme depths. Additionally, some small manned submersibles used for scientific research, such as Triton, c-explorer3, and Deep Flight Dragon, are also spherically shaped for similar reasons. Cylindrical pressure hulls are relatively easy to process and manufacture, and they offer high utilization of internal space. This shape is commonly adopted by large sightseeing submersibles and small private submersibles such as Atlantis, MarkII, Aurora-5, and C-Explorer5. The cylindrical shape allows for the efficient use of space for passengers or equipment while maintaining structural integrity. The lotus-root shape represents a novel pressure hull structure consisting of a series of intersecting spherical shells [20]. This shape can consist of double, triple, quadruple, or multiple intersecting spherical shells. The AURORA-6 submersible is based on this innovative lotus-root shape, which provides increased strength and stability while maximizing the internal volume. Selecting a suitable pressure hull shape depends on various factors, including the intended purpose, depth requirements, structural considerations, and design objectives of the submersible. Each shape offers unique advantages and considerations in terms of structural strength, internal space utilization, and overall performance.
In terms of materials, special marine environments impose greater requirements on the pressure hull materials of submersibles. The materials used in submersibles can be categorized into two types: metallic and nonmetallic materials [21]. Metallic materials commonly include ultra-high-strength steel, aluminum alloys, and titanium alloys [8]. For instance, the Mir1 and Mir2 submersibles were built with pressure hulls made from ultra-high-strength steel. On the other hand, underwater gliders such as Spray Glider, Seaglider, Slocum, and PETREL were built with pressure hulls made from aluminum alloys. Notable submersibles such as Nautile, Shinkai, Alvin, New Alvin, Limiting Factor, Jiaolong, Fendouzhe, and Shenhaiyongshi were all built with pressure hulls made from titanium alloys. Nonmetallic materials mainly consist of structural ceramics, advanced polymer matrix composites [22], and organic glass. In the Nereus submersible, structural ceramics were utilized as the buoyancy material. Submersibles such as AUSSMOD2, Deep Glider, Cyclops1, and Haiyi were developed with pressure hulls made from advanced polymer matrix composites. Additionally, organic glass is used in Huandao Jiaolong as its pressure hull material.
The weaknesses of composites have been revealed in several manned submersibles, such as the Deepflight Challenger and Titan. The recent tragedy involving Titan serves as a reminder that the use of inhomogeneous materials in a manned cabin must be approached with great caution, considering the homogeneity of deep-water pressure.
The utilization of advanced materials has facilitated the construction of stronger and lighter pressure hulls, providing increased interior space for submersible operators. These innovative materials enable the pressure hulls to withstand higher underwater pressures, allowing for deeper diving depths. This advancement in materials not only enables deeper exploration but also enables the miniaturization of submersibles.
Regarding the design of manned cabins, designing the viewport is technically difficult. Presently, most people apply the American Society of Mechanical Engineers (ASME) PVHO-1 rule [23]. In cooperative studies of full ocean depth manned submersibles [5], Sauli Ruohonen, the chief designer of the MIR submersible, Anatoly Sagalevitch, the chief pilot of the MIR submersible, and Weicheng Cui, the chief designer of the Rainbowfish full ocean depth manned submersible, all agreed that the ASME rule is too conservative and that its thickness can be reduced. After a series of 13 models tested by Cui’s team, they found that the thickness can be reduced from 403 mm for the rule requirement to 240 mm for a full ocean depth manned cabin with a diameter of 2.1 m [24]. This approach can significantly reduce the total weight of the manned cabin. | [question]
https://spj.science.org/doi/10.34133/olar.0036
Use information from the article only to explain your answer. Do not rely on outside knowledge. | What does it mean that the Dept of Labor is hiring for an intermittent employment position? | EMPLOYEE HANDBOOK
Table of Contents
Welcome................................................................................................................................................... 5
About the Agency .................................................................................................................................... 6
Mission Statement ................................................................................................................................... 6
Supersedence ........................................................................................................................................... 6
General Highlights .................................................................................................................................. 7
Access Card ............................................................................................................................................. 7
Affirmative Action/Equal Employment Opportunity Employer ....................................................... 7
Americans with Disabilities Act ............................................................................................................ 7
Appearance & Dress Code ..................................................................................................................... 7
Building Security .................................................................................................................................... 7
Code of Ethics ......................................................................................................................................... 7
Collective Bargaining ............................................................................................................................. 7
Email & Internet Use.............................................................................................................................. 8
Employee Assistance Program .............................................................................................................. 8
Employee Background Check ............................................................................................................... 8
Employment Applications ...................................................................................................................... 8
Equal Employment Opportunity........................................................................................................... 8
Immigration Law Compliance............................................................................................................... 8
On-the-Job Accident/Illness ................................................................................................................... 9
Photo Identification ................................................................................................................................ 9
Political Activity ...................................................................................................................................... 9
Rideshare ................................................................................................................................................. 9
Safety ........................................................................................................................................................ 9
Sexual Harassment ................................................................................................................................. 9
Smoking ................................................................................................................................................. 10
Standards of Conduct ........................................................................................................................... 10
Telephones - Cellular Telephones ....................................................................................................... 10
Travel ..................................................................................................................................................... 10
Uniformed Services Employment & Reemployment......................................................................... 10
Violence in the Workplace ................................................................................................................... 10
Visitors ................................................................................................................................................... 11
Weather & Emergency Closings ......................................................................................................... 11
Collective Bargaining ........................................................................................................................... 12
Bargaining Unit Representation .......................................................................................................... 12
Union Contracts .................................................................................................................................... 12
Grievance Procedure ............................................................................................................................ 12
Appointment and Promotion ............................................................................................................... 14
Merit System ......................................................................................................................................... 14
Job Classification .................................................................................................................................. 14
Classified & Unclassified Positions ..................................................................................................... 14
Competitive & Non-Competitive Positions ........................................................................................ 14
Scheduled & Continuous Recruitment Job Announcements ........................................................... 14
Job Announcements.............................................................................................................................. 14
Employment Opportunities ................................................................................................................. 15
Application Accommodations for People with Disabilities ............................................................... 15
Rejection from State Application ........................................................................................................ 15
Appointment Types .............................................................................................................................. 15
Working Test Period ............................................................................................................................ 16
Service Ratings ...................................................................................................................................... 17
Promotion & Reclassification .............................................................................................................. 17
Temporary Service in a Higher Class ................................................................................................. 17
Transfers ................................................................................................................................................ 18
Dual Employment ................................................................................................................................. 18
Personnel Records ................................................................................................................................ 19
Personnel Files ...................................................................................................................................... 19
Change of Personal Data ...................................................................................................................... 19
Working Hours ..................................................................................................................................... 19
Meal & Break Periods .......................................................................................................................... 20
Overtime & Compensatory Time ........................................................................................................ 20
Shift Assignments.................................................................................................................................. 20
Attendance ............................................................................................................................................. 20
Paid Leave Time ................................................................................................................................... 21
Holidays ................................................................................................................................................. 21
Sick Leave .............................................................................................................................................. 21
Vacation Leave ...................................................................................................................................... 22
Personal Leave ...................................................................................................................................... 23
Jury Duty ............................................................................................................................................... 23
Military Leave ....................................................................................................................................... 24
Leave Without Pay ............................................................................................................................... 25
Leave of Absence Without Pay (LAW) ............................................................................................... 25
Maternity Leave .................................................................................................................................... 25
Medical Leave ....................................................................................................................................... 25
Family Leave ......................................................................................................................................... 26
Salary ..................................................................................................................................................... 27
Payment ................................................................................................................................................. 27
Payday .................................................................................................................................................... 27
Annual Increments ............................................................................................................................... 27
Collective Bargaining & Cost-of-Living Increases ............................................................................ 27
Longevity Pay ........................................................................................................................................ 27
Deductions ............................................................................................................................................. 29
Federal Income Tax & Social Security Tax ....................................................................................... 29
Connecticut Income Tax ...................................................................................................................... 29
Health Insurance ................................................................................................................................... 29
Group Life Insurance ........................................................................................................................... 29
Supplemental Benefits .......................................................................................................................... 29
Direct Deposit ........................................................................................................................................ 30
Deferred Compensation ....................................................................................................................... 30
State Employees Campaign ................................................................................................................. 30
Union Dues ............................................................................................................................................ 30
Credit Unions ........................................................................................................................................ 30
Retirement Tiers ................................................................................................................................... 31
Separation .............................................................................................................................................. 36
Resignation ............................................................................................................................................ 33
Layoff ..................................................................................................................................................... 33
Reemployment Rights .......................................................................................................................... 33
Rescind of Resignation or Retirement ................................................................................................ 33
Exit Interview ........................................................................................................................................ 33
Retirement ............................................................................................................................................. 33
Retirement Types .................................................................................................................................. 36
Pension Payment Options .................................................................................................................... 36
Insurance Benefits ................................................................................................................................ 36
Training and Development .................................................................................................................. 38
In-Service Training ............................................................................................................................... 38
Management Development Courses .................................................................................................... 38
Tuition Reimbursement ....................................................................................................................... 38
Conferences, Workshops & Seminars ................................................................................................ 38
EMPLOYMENT POLICIES ............................................................................................................... 39
Welcome
Whether you have just joined the agency or have been with us for a while, we are confident that you have found, or will find, our organization to be a dynamic and rewarding place in which to work. We consider the
employees of the Department of Labor to be our most valuable resource and we look forward to a
productive and successful partnership.
This handbook has been prepared for you to serve as a guide for the employer-employee relationship.
The topics covered in this handbook apply to all employees of the Department of Labor. It is important
to keep the following things in mind about this handbook.
First, it contains general information and guidelines. It is not intended to be comprehensive or to address
all the possible applications of, or exceptions to, the general policies and procedures described. It is not
intended to replace or supersede collective bargaining agreements that may cover many of your terms
and conditions of employment. Employees covered by a collective bargaining agreement will receive a
copy of their contract at orientation. You should read and become familiar with your collective
bargaining agreement, this employee handbook and the agency’s employment policies. If you have any
questions concerning eligibility for a particular benefit, or the applicability of a policy or practice, you
should address your specific questions to your supervisor or contact your HR Generalist for clarification.
Second, neither this handbook nor any other agency document confers any contractual right, either
expressed or implied, to remain in the agency’s employ or guarantee any fixed terms and conditions of
your employment.
Third, the policies, procedures, and benefits described here may be modified or discontinued from time
to time. We will try to inform employees of any changes as they occur but cannot guarantee immediate
advance notice of changes.
Finally, some of the subjects described here are covered in detail elsewhere. The terms of written
insurance policies and/or plan documents are controlling for health, life, retirement and deferred or
reduced income benefits. You should refer to those documents for specific information since this
handbook is only designed as a brief guide and summary of policies and benefits.
We are pleased to have you as a member of the Department of Labor and look forward to a successful
and beneficial association.
About the Agency
The Department of Labor handles far more than unemployment insurance benefits. Helping employers
and jobseekers with their workforce needs is our goal. An overview of the many programs and public
services the agency offers is available on the website (www.ct.gov/dol), which also contains information
ranging from upcoming job fairs to wage and workplace guidelines.
Mission Statement
The Department of Labor is committed to protecting and promoting the interests of Connecticut workers.
In order to accomplish this in an ever-changing environment, we assist workers and employers to become
competitive in the global economy. We take a comprehensive approach to meeting the needs of workers
and employers, and the other agencies that serve them. We ensure the supply of high-quality integrated
services that serve the needs of our customers.
Supersedence
This revised version of the Employee Handbook supersedes all prior versions that have been issued by
the Department of Labor and will be effective April 2023.
General Highlights
Access Card
Central Office and Annex employees are issued an access card to enter the building. Should your card
be lost, stolen or destroyed, contact Facilities Operations so the card can be deactivated and a
replacement issued.
Affirmative Action/Equal Employment Opportunity Employer
The Department of Labor is committed to affirmative action/equal employment that will build on the
strengths of our current workforce and continually enhance the diversity of our organization. The
department opposes all forms of discrimination and has developed a set of anti-discriminatory policies.
Please direct your questions about affirmative action issues to the AA/EEO Manager at Central Office,
200 Folly Brook Boulevard, Wethersfield, CT 06109; telephone (860) 263-6520. To file a complaint,
please click on the link to access the form: Internal Discrimination Complaint
Americans with Disabilities Act
The Department of Labor complies with all relevant and applicable provisions of the Americans with
Disabilities Act (ADA). The agency will not discriminate against any qualified employee or job
applicant with respect to any terms, privileges, or conditions of employment because of a person’s
physical or mental disability. See the Americans with Disabilities Act Reasonable Accommodation
Policy
Appearance & Dress Code
It is the policy of the agency to project a business-like image to clients, visitors and co-workers. In line
with this, you are required to dress appropriately in clothing which is suitable for your job responsibilities
and work environment, meets the requirements established for safety reasons, and complies with the
agency’s dress code requirements. See Professional Image Policy.
Building Security
Each and every employee must follow the building security rules and regulations. Employees are not
allowed on the property after hours without prior authorization from their supervisor.
Code of Ethics
The department’s standards of ethical conduct, which all employees are expected to be familiar with and
observe, are outlined in the Code of Ethics for Public Officials & State Employees and the Ethical
Conduct Policy .
Collective Bargaining
Your assignment to a collective bargaining unit (union) is based on your job classification. As a
bargaining unit member, you will have union dues deducted from your bi-weekly paycheck. You may
elect not to join a union. Your union contract governs salary, benefits and hours of work, and other
terms and conditions of employment. Collective bargaining agreements are negotiated periodically.
Exempt employees are excluded from the collective bargaining process and are not required to pay union
dues.
Email & Internet Use
It is the policy of the agency to provide electronic mail (email) and internet access for work-related
purposes. You are required to adhere to this and related policies to ensure proper, legal and effective
use of these electronic tools and resources. See Acceptable Use of State Systems Policy.
Employee Assistance Program
The Employee Assistance Program (EAP) is designed to offer consultation and counseling services for
employees and their dependents who are experiencing problems which may be impacting their life at
work and/or at home. Some of these problems may include family, marital, alcohol/drugs, emotional
distress, and job-related, legal, or financial difficulties. Participation is voluntary and confidential. EAP
services are provided by Wheeler EAP. To schedule an appointment or obtain more information, call 1-800-252-4555 or 1-800-225-2527, or log on to their website at Wheeler EAP.
Employee Background Check
Prior to making an offer of employment, Human Resources may conduct a job-related background check.
A comprehensive background check may consist of prior employment verification, professional
reference check, education confirmation and fingerprinting.
Employment Applications
We rely upon the accuracy of information contained in an employment application and the accuracy of
other data presented throughout the hiring process and employment. Any misrepresentation, falsification
or material omission of information or data may result in exclusion of the individual from consideration
for employment or, if the person has been hired, termination of employment.
Equal Employment Opportunity
The Department of Labor is an equal employment opportunity employer. Employment decisions are
based on merit and business needs. The Department of Labor does not discriminate on the basis of race,
color, citizenship status, national origin, ancestry, gender, sexual orientation, age, religion, creed,
physical or mental disability, marital status, veterans’ status, political affiliation, or any other factor
protected by law. To file a complaint, please click on the link to access the form: Internal Discrimination
Complaint.
Immigration Law Compliance
All offers of employment are contingent on verification of the candidate’s right to work in the United
States. On the first day of work, every new employee will be asked to provide original documents
verifying his or her right to work and, as required by federal law, to complete and sign an Employment
Eligibility Verification Form I-9.
On-the-Job Accident/Illness
The agency promotes safety in the workplace. The State of Connecticut also has implemented a
Managed Care Program for Workers’ Compensation, administered by Gallagher Bassett Services, Inc.
You must report a work-related accident or illness to your supervisor, who is required to call a 24-hour
hotline (1-800-828-2717) to report your accident or illness and initiate a claim. If your supervisor is
unavailable, you may call or have someone call for you. Your supervisor must also complete the First
Report of Injury (Form WC-207) and submit it to [email protected] or by fax to 959-200-4841,
whether or not you seek treatment or lose time from work. To become eligible for workers’
compensation benefits, you must seek treatment from a network physician or medical facility. Forms
can be obtained at Workers' Compensation Rights, Responsibilities, and Claims--Documents (ct.gov).
In cases of a medical emergency call 911 to seek immediate medical attention. Contact the DAS Workers'
Compensation Division at (860) 713-5002 with any questions regarding access.
Photo Identification
You are required to wear and visibly display a photo identification badge during working hours. If your
identification badge is lost, stolen, or destroyed, or you have transferred to a different unit, you must
request a replacement through Facilities Operations.
Political Activity
As a state employee, state statutes govern your involvement in various political activities such as
campaigning and running for elective office. Also, if you are working on programs financed in whole
or in part by federal funds, you are subject to the provisions of the federal Hatch Act, which is generally
more restrictive than state statute. The purpose of these laws is to avoid a conflict of interest between your state job and political activities. Information regarding political activity may be found in DAS General Letter 214D – Political Activity. The Ethical Conduct Policy also addresses these issues, and you are advised to contact the agency’s Ethics Liaison regarding
any political activity. See Ethical Conduct Policy.
Rideshare
The department promotes the statewide Rideshare Program, an opportunity to reduce your transportation
expenses to work. Consider using a ride-sharing mode (carpool, vanpool or bus) as an alternative to
driving alone. Ride sharing saves you money and energy and helps preserve the environment. For information
call 800-972-EASY (800-972-3279) or visit the website at www.rideshare.com.
Safety
The safety and health of employees is our top priority. The agency makes every effort to comply with
all federal and state workplace safety requirements. Each employee is expected to obey safety rules and
exercise caution and common sense in all work activities. Promptly report safety concerns to your
supervisor.
Sexual Harassment
The Department of Labor does not tolerate sexual harassment. Sexual harassment may include
unwelcome sexual advances, requests for sexual favors, or other unwelcome verbal or physical contact
of a sexual nature when such conduct creates an offensive, hostile and intimidating work environment
and prevents an individual from effectively performing the duties of their position. See Sexual
Harassment Policy.
Smoking
Smoking is prohibited throughout agency buildings and offices, including in rest rooms, private offices,
lounges and similar areas. Smoking is permitted only in designated areas outside office buildings and
other work locations. The use of smokeless tobacco and e-cigarettes is also prohibited and subject to the same restrictions.
Standards of Conduct
The work rules and standards of conduct for employees are important, and the agency takes them seriously. All employees are urged to become familiar with and must follow these rules and standards.
See Employee Conduct Policy.
Telephones - Cellular Telephones
The agency recognizes that occasionally it is necessary for employees to make or receive personal
telephone calls during working hours. You are expected to restrict your personal telephone usage, both
on state-owned phones and personally owned cellular phones, to reasonable, incidental calls that do not
interfere with your work schedule or the performance of your duties. To avoid being disruptive to others
in the workplace, please make certain audible alerts are disabled.
Travel
Your position may require travel to conduct state business. If you are required to travel for work and hold a valid driver’s license, you may obtain a state-owned vehicle from a central carpool. Use of your personal vehicle in the performance of agency duties is allowable, and mileage reimbursement may be requested, only when a State-owned vehicle is not reasonably available. You must present proof of automobile insurance that meets the minimum coverage requirements. Contact your supervisor or Business
Management if you have any questions.
Uniformed Services Employment & Reemployment
As an equal opportunity employer, the Department of Labor is committed to providing employment and
reemployment services and support as set forth in the Uniformed Services Employment and Reemployment Rights Act of 1994 (USERRA).
Violence in the Workplace
The Department of Labor has a policy prohibiting workplace violence. Consistent with this policy, acts
or threats of physical violence, including intimidation, harassment and/or coercion, which involve or
affect the organization and its employees will not be tolerated. See Violence in the Workplace
Prevention Policy.
Visitors
To provide for safety and security, only authorized visitors are allowed in the workplace. All visitors
must enter through the main reception area, sign-in and sign-out at the front desk and receive a visitor
identification to wear while on the premises. Authorized visitors will be escorted to their destination
and must be accompanied by an employee at all times.
Weather & Emergency Closings
At times, emergencies such as severe weather or power failures can disrupt business operations.
Everbridge is a system that the state uses to notify enrolled individuals of safety and weather concerns. You can choose the methods by which you want to be notified. Sign-up is free. Any personal information provided (such as a cell number) will be used only for important employee notifications directed by DAS. Everbridge will never give or sell contact or location information to any vendor or other organization.
The Department of Emergency Service & Public Protection website is the official source of information
for state employees. Use this page to find any official announcements about closures or delayed
openings that have been declared by the Governor.
The Everbridge system can send alerts to your work phone and email as well as your home phone, cell phone,
and home email.
The Statewide CT Alert system can also keep you informed of state emergencies and send you emails
and text alerts.
FEMA’s Ready.gov preparedness site has information on how to keep safe during the winter.
Collective Bargaining
Bargaining Unit Representation
Labor unions and management at times negotiate collective bargaining agreements (union contracts).
The contracts govern such areas as salary, benefits, hours of work, and the terms and conditions of
employment. Most state job classifications have been assigned to particular bargaining units (unions)
and state employees have voted to have unions represent them in the negotiation process.
If you are a nonexempt employee, you have been assigned to a bargaining unit based on your job
classification and will be represented by that specific union. If you are an exempt employee, you have
been excluded from the collective bargaining process. The terms and conditions of your employment
will be governed by state statutes, rules and regulations.
Union Contracts
Union contracts, established through the formal negotiation process, outline the terms and conditions of
your employment. You should familiarize yourself with your contract. Benefits and provisions vary
between bargaining units. Contract language has been crafted to avoid disputes and eliminate
misunderstandings. Contract provisions, however, may be open to interpretation and subject to the
grievance and arbitration process. Direct your questions about your union contract to your supervisor,
union representative or Human Resources Generalist.
Grievance Procedure
Your problems or complaints should be resolved quickly and fairly. First, discuss the issue with your
supervisor, who may help you find a solution. If your supervisor or another employee in the chain of
command cannot resolve your problem or complaint, or if you feel that you have been treated unjustly,
contact your union steward or Agency Labor Relations Specialist. If an issue cannot be resolved
informally, you may follow the grievance procedure outlined in your union contract. This procedure
helps resolve disputes concerning the interpretation and application of a contract. You should, however,
make every effort to resolve an issue before filing a grievance. Though specific procedures may vary,
your union contract establishes time limits for initiating grievances and obtaining responses.
The first steps of the grievance process are informal to encourage quick resolution. If an issue still cannot
be resolved, more formal meetings are conducted until the grievance reaches the highest level of the
process. Most grievance procedures permit arbitration when an issue cannot be resolved at the highest
level. An arbitrator, an impartial party chosen by the union and management, will hear both sides of an
issue and render a binding decision.
A union normally requests arbitration, but you as an employee may also request it in certain
circumstances. Arbitration is permitted only if negotiated as a step in the grievance procedure.
You or a group of employees may present a grievance to management for resolution without your union’s
participation. However, the resolution must be consistent with your union contract and your union must
be given the opportunity to attend all meetings.
If you are an exempt classified employee, you may appeal certain actions through the grievance
procedure as outlined in Sec. 5-202 of the Connecticut General Statutes.
Appointment and Promotion
Merit System
The appointment and promotion of state employees is based on the merit principles in the State Personnel
Act. As with other federal, state and municipal merit systems, this system was established to minimize
the influence of electoral politics on the employment and retention of state employees. The system
strives to place the best qualified people in state service and to ensure that they are fairly treated in the
appointment and promotion process. The merit system is not subject to collective bargaining.
Job Classification
The state, as an employer of thousands of people, must systematically describe and group jobs to ensure
consistent and fair treatment when assigning, compensating and promoting employees. Consequently,
it has established a classification plan for all jobs in the executive branch of state service. Individual
positions are grouped into job classes, with each class consisting of positions with similar duties,
responsibilities and required qualifications. Your job classification is the foundation for the employment
process.
Classified & Unclassified Positions
Most positions in the executive branch of state government are classified. Unclassified positions may
be exempt from job announcements. The State Personnel Act lists a number of unclassified categories:
agency heads, members of boards and commissions, officers appointed by the governor, deputies and
executive assistants to the head of departments, executive secretaries, employees in the Senior Executive
Service and professional specialists.
Competitive & Non-Competitive Positions
Most classified positions are competitive and require an application. The type of experience required
depends on the job classification. Applicants must meet minimum general experience and training requirements to be eligible for appointment. If a position requires a professional license or degree, however, there may be no additional requirements beyond possession of that license or degree.
Scheduled & Continuous Recruitment Job Announcements
Most state job opportunities are announced to the general public with a specific closing date. If you
apply for a job opening, you will be notified if you are selected for an interview by the hiring agency.
When the state considers continuous recruiting necessary, it may postpone the closing date for filing
applications until it receives a suitable number of candidates. A job posting will indicate when recruiting
is continuous and that applications may be filed until further notice.
Job Announcements
To meet merit system objectives, the state has developed competitive job classifications to fill many of
its positions. They are not used to fill unclassified positions or those in classes designated as noncompetitive. State job announcements fall into the following categories:
Open to the Public. If you meet the minimum experience and training qualifications for a position, you
may participate in this type of recruitment. Open-competitive job announcements are administered
periodically, usually when a state agency is recruiting for a vacant position.
Statewide & Agency Promotion. If you are a state employee who meets the minimum experience and
training qualifications for a position and has completed six months of continuous service in a state agency, you may participate in a statewide recruitment. Agency promotional announcements have the additional requirement that you must be a current agency employee.
Employment Opportunities
Agency job announcements are posted on the DAS Online Employment Center. You should check
regularly for the most up to date information.
To apply for employment, you must complete a Master Application on the DAS Website. Check the
state employment pages on the Department of Administrative Services website (Job Openings - Department of Administrative Services (jobapscloud.com)) for information about completing the application form and current job opportunities, and to sign up for e-mail notification of job openings.
Application Accommodations for People with Disabilities
The state may conduct recruitments in various ways. If you need special accommodations for a particular
recruitment, you or someone on your behalf should immediately notify the DAS at (860) 713-7463. You
must supply the application title and job number, a description of your special needs, and documentation of the disability.
Rejection from State Application
Your application for a state job opening may be rejected if (1) your application was received after the
closing date, (2) you did not meet the minimum requirements, (3) your years of experience did not match
the requirements, (4) specific information was missing from your application, (5) you failed to meet the
special requirements for the position, or (6) your years of experience did not match the special
requirements.
Appointment Types
Durational. An employee hired for a specific term, for a reason not provided above, including a grant
or specially funded program, not to exceed one year. A durational employee shall become permanent
after six months, or the length of the working test period, whichever is longer.
Emergency. The state may appoint you to an emergency position to meet short-term agency needs. The
appointment may extend for as long as two months but may not be renewed in a fiscal year.
Intermittent. Intermittent employment is also work on an "as needed" basis. The agency may use
intermittent interviewers to supplement permanent staff in times of high unemployment. They are paid
an hourly rate for time worked and may receive benefits. They are eligible to apply for agency
promotional postings following the completion of 1044 hours of intermittent service.
Permanent. The state may appoint you to a permanent competitive position from a certification list.
You must successfully complete the working test period to gain permanent status.
Provisional. The state may provisionally appoint you to a position that must be filled immediately if no
active certification list exists, or an insufficient number of candidates are listed. The appointment may
extend for as long as six months or until a job announcement for the position has been held and a
certification list promulgated. You may not receive more than one provisional appointment in a fiscal
year or serve more than six months as a provisional appointee. Your job performance while a provisional
must be satisfactory. To receive a permanent appointment, you must be appointed from a competitive
process for the position. If you are not appointed from a competitive process and do not have a
permanent position to which you may return, you must be separated from state service. If the competitive
process is not completed for a position within six months, an additional temporary or emergency
appointment may be authorized.
Seasonal. Seasonal employment for a position established for a specific period, usually during summer
months. Individuals employed are paid an hourly rate and are not entitled to any fringe benefits.
Temporary. A position filled for a short-term, seasonal, or emergency situation, including to cover for a permanent position when the incumbent is on workers’ compensation or other extended leave, not to exceed 6 months. May be extended up to one year. If a temporary employee is retained for more than 12 months, said employee shall be considered durational.
Working Test Period
The working test period, or probationary period, for a state employee is an extension of the state
recruitment process. You must serve this period to gain permanent status following initial appointment
or promotion. Your initial test period is generally six months, depending on the applicable contract or
state regulation. Your promotional test period is generally four to six months, again depending on the
applicable contract or regulation. Exceptions may occur in the length of the trial period for trainee
positions. Questions about your working test period may be directed to your supervisor or Human
Resources Generalist.
During an initial working test period, you are considered a probationary employee and will work closely
with supervisors and colleagues to learn your duties. This period also gives your supervisor the
opportunity to evaluate your response to training and job requirements. If you demonstrate acceptable
performance during your initial test period, you will be given a satisfactory service rating and gain
permanent status as a state employee.
Your working test period may be extended in certain circumstances. If you do not meet acceptable
performance standards during the initial working test period, you will be separated from state service.
You may not appeal a dismissal during your initial test period through the contractual grievance
procedure, but you may request an administrative review. If you fail to meet acceptable performance
standards during a promotional working test period, you will revert to your previous classification.
Service Ratings
You will receive a service rating for your initial working test period or promotional test period, and at
least three months before your annual increase date. Depending on your union contract or state statutes,
you may receive a service rating at any time, particularly when your job performance has changed
significantly.
Service ratings record your progress and performance as training and job experience increase. The state
recognizes satisfactory performance by awarding annual salary increases (as negotiated) until reaching
the maximum step in a salary group. For employees at the maximum step, some bargaining units award
a lump sum payment in lieu of an annual increment. A “less than good” rating may prevent you from
receiving an increase. An “unsatisfactory” during the working test period signifies failure. After
attaining permanent status, two successive “unsatisfactory” ratings may result in your dismissal.
Managers are evaluated in accordance with the provisions of the Performance Assessment and
Recognition System (PARS) Program.
Promotion & Reclassification
Generally, there are two ways in which you may receive an appointment to a higher-level job
classification. First, you may compete for a new position or an opening that arises when another
employee leaves an existing position. The agency may use a formal state employment application
process to obtain a list of candidates to be considered for an opening or it may use a less formal
recruitment and selection process. In either event, in order to be considered you must meet the minimum
qualifications for the higher classification and comply with the application procedures. Recruitment
notices are posted internally on the agency intranet, and at times externally on the Department of
Administrative Services website. It is your responsibility to monitor them and respond according to the
instructions on the job posting.
Additionally, you may progress to a higher level through reclassification. After working for the agency
for some time, you may find that your duties have expanded and are more consistent with a higher-level
job classification. In such cases, your supervisor will ask you to complete a job duties questionnaire,
which will be evaluated by Human Resources. If you are found to be working “out of class,” the agency
has the option of either removing the higher-level duties or reclassifying your position to the higher level.
Certain conditions must be met for reclassification. You must be in your current position for at least six
months, have a rating of “good” or better on your last two performance evaluations and meet the
minimum experience and training requirements for the higher class. If you have applied for a job
opening and did not qualify for the classification, this is evidence that you do not meet the qualifications
for the higher-level class and cannot be considered for reclassification.
Temporary Service in a Higher Class
When a temporary vacancy occurs in a non-entry level classification, such as the result of an employee
being on an extended leave of absence, the agency may fill the opening by temporarily assigning you to
a higher level as long as the assignment lasts for more than 30 days and meets any other relevant union
contract provisions. You must meet the minimum qualifications of the class. While serving in this type
of service, you are paid at the higher level, but you retain status in your permanent (lower) classification.
Benefits such as longevity and vacation accrual are based on the permanent class.
Transfers
You may voluntarily transfer within the agency or to another state agency. To place your name on a
Statewide Transfer list, for your current job class in which you hold permanent status, please visit the
DAS Website, Freenames - Department of Administrative Services (jobapscloud.com), scroll down and
follow the process of Statewide Transfers. If your job classification is unique to the agency, your transfer
options will be limited to those classes deemed comparable to the one in which you have permanent
status. Consult your union contract for more information.
If you are interested in transferring to another work location within the agency and meet the job requirements, Human Resources periodically sends emails with transfer opportunities; to be considered, you must follow the procedures noted in the email.
The agency may involuntarily transfer you under certain circumstances, generally defined in your union
contract or state personnel regulations. Transfers occur for a variety of reasons: when the agency seeks
to better use its resources, to avoid layoffs, to meet emergency or seasonal conditions, or to accommodate
you.
If you are an exempt employee, your transfer is subject to state regulations and the State Personnel Act.
Dual Employment
You may be authorized to work at a secondary agency subject to the dual employment provisions of the
regulations for state agencies. For this to occur, the secondary agency must initiate and complete the
appropriate paperwork. The secondary agency will forward a copy of the dual employment request form
to the primary agency for completion and return. If all provisions are met, subject to any fair labor
standards considerations and the operating needs of the department, you may be eligible for secondary
employment. Secondary employment may not pose a conflict of interest or interfere with the
performance of your job duties and your approved work schedule for the Department of Labor.
Personnel Records
Personnel Files
The agency maintains a digital personnel file containing information about your employment: service
ratings; personnel processing forms; appointment, promotion, and disciplinary letters. The agency also
maintains a separate, confidential file that contains your medical documents, including doctor’s notes
and medical certificates.
You may review your digital personnel file by contacting Human Resources. You may sign a waiver to
allow another person, such as a union official, to review your files. The agency must comply with written
requests for information about its employees under the state freedom-of-information law. If the agency
considers an information request to be a possible invasion of your privacy, you will be notified.
Change of Personal Data
Whenever you change your name, address, number of dependents, telephone number, or marital status,
you must promptly notify Payroll so that agency records and files may be updated. You may also need
to complete a new federal or state withholding allowance certificate (W-4 or CT W-4), or various health
insurance forms.
Working Hours
The negotiated workweek for most staff members currently averages 40 hours per week. Some union
contracts provide for a 35 or 37.5-hour workweek. Many employees work a standard schedule of 8:00
a.m. to 4:30 p.m. The agency has also established nonstandard work schedules, which are approved in
advance by the appointing authority in consultation with the Director of Human Resources. Provision
for flex time has been included in some contracts. If your position is covered by flex time or other
nonstandard workweek, your supervisor will explain its operation. The Payroll Unit will maintain your
attendance record.
From time to time and consistent with the terms of the applicable collective bargaining agreement, it
may be necessary to temporarily or permanently change your work schedule to meet operational needs.
In such a situation you will be given as much notice as possible, at a minimum that is required by your
union contract. Regardless of your work schedule, you are expected to arrive at work on time, return
from lunch and breaks on time, and not leave your job prior to quitting time.
Meal & Break Periods
Full-time employees are permitted two 15-minute breaks and a 30-minute unpaid meal period. Longer
unpaid meal periods are allowed with supervisory approval. The schedule for all meal and break periods
is determined by your supervisor based on business operations and staffing needs. Your supervisor will
inform you of your schedule and any required changes.
Employees are not permitted to work through lunch to leave early. Breaks do not accumulate, nor may
they be used to start late or leave early.
Overtime & Compensatory Time
Overtime occurs when you work in excess of your regular established weekly schedule. Overtime
assignments must be approved in advance, except in extreme emergencies. The Fair Labor Standards
Act (FLSA), state statutes and regulations, and your union contract govern your eligibility for overtime
and the rate of compensation. Compensatory time is a form of accrued leave time that may be used later;
it does not constitute a basis for additional compensation. Compensatory time must be taken in
accordance with the provisions of your contract and agency policy.
The FLSA may conflict with your union contract regarding compensation for overtime. Generally, you
will be paid by the method that provides the greater benefit. Hours worked in excess of 40 in one week
are generally compensated at the rate of time-and-one half. The time-and-one-half rate is derived from
your basic hourly wage rate. Some employees may be ineligible for the overtime provisions of FLSA.
Questions may be directed to Payroll.
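The following example is for illustration only; the wage figure and hours are assumed and do not come from any contract. An employee with a basic hourly wage rate of $25.00 who is eligible for FLSA overtime and who works 44 hours in one week would generally receive 40 hours at $25.00 ($1,000.00) plus 4 hours at the time-and-one-half rate of $37.50 ($150.00), or $1,150.00 for that week. Your actual eligibility and rate of compensation are governed by the FLSA, state statutes and regulations, and your union contract.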
Shift Assignments
Some areas engage in multi-shift operations. Depending on the starting and ending times of your shift
and union contract, you may be eligible for shift-differential payments. These usually take the form of
additional pay for the hours worked on your assigned shift. Generally, any shift that begins before 6:00
a.m. or after 2:00 p.m. is subject to shift-differential payments. Some employees may not be eligible for
these payments, even when assigned to such a shift. Consult your union contract for information
regarding eligibility for the shift and weekend differentials, and the applicable pay rate.
Attendance
You are responsible for maintaining a good attendance record. Frequent absenteeism reduces the level
of your service to the agency and the public, increases operational costs, and places a burden on your
co-workers. Use your accrued leave in accordance with agency policies and procedures and ensure that
you comply with Employee Dependability Policy requirements. You should request leave time as far in
advance as possible. Refer to your union contract for additional guidelines. Agency operating needs,
the reasonableness of the request, and the specific language contained in the union contract govern the
approval or denial of your leave request. Whenever possible, avoid unscheduled leave.
Paid Leave Time
Holidays
The state grants 13 paid holidays per year to permanent, full-time employees: New Year’s Day, Martin
Luther King’s Birthday, Lincoln’s Birthday, Washington’s Birthday, Good Friday, Memorial Day,
Juneteenth Day, Independence Day, Labor Day, Columbus Day, Veterans’ Day, Thanksgiving Day and
Christmas Day. Intermittent and durational employees must work the equivalent of six months (1044
hours) to be eligible for holiday pay.
If a holiday falls on a Saturday or Sunday, the state generally designates the Friday preceding or the
Monday following as the day it will be observed. A calendar detailing the exact day of holiday
observance appears on the Human Resources intranet site.
You will be paid for a holiday if you are on the payroll on or immediately before or after the day it is
celebrated; you normally will not receive holiday pay if on a leave of absence without pay before and
after a scheduled holiday. Consult your union contract for information about compensation for work
performed on a state holiday.
Sick Leave
As a permanent employee, you accrue sick leave from your date of employment for each fully completed
calendar month of service, except as otherwise provided in the statutes. You must use sick leave when
incapacitated or in the special cases described in your union contract. Upon exhaustion of sick leave,
you must use other accrued leave in lieu of sick leave unless FMLA rules dictate otherwise. If an
employee is sick while on annual vacation leave, the time will be charged against accrued sick leave if
supported by a properly completed medical certificate.
Sick leave is not an extension of vacation or personal leave. You should maintain a sick leave balance
as a form of insurance in the event of a long-term illness.
Accrual. Full-time employees accrue paid sick leave at the rate of 1¼ days per completed month of
service or 15 days per year. If you are absent without pay for more than forty hours in any month, you
do not accrue sick leave in that month. If you are an eligible part-time employee, you accrue paid sick leave on a pro-rated basis, based on your scheduled hours as a percentage of a full-time schedule.
Balances. Payroll records your sick leave balance (time accrued but not used) in hours and minutes.
When you retire, the state will compensate you for 25 percent of your accrued sick leave balance (to a
maximum of 60 days).
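For illustration only (the balances below are assumed): an employee who retires with 200 days of accrued sick leave would be compensated for 50 days (25 percent of 200), while an employee who retires with 280 days would be compensated for the 60-day maximum, because 25 percent of 280 (70 days) exceeds that cap.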
Call-In Procedure. If you are unexpectedly absent as a result of injury or illness, you must notify your
supervisor or designee as early as possible, but no later than one-half hour before your scheduled
reporting time. If your absence is continuous or lengthy and you have not been granted a medical leave
of absence, you must notify your supervisor on a daily basis. If you fail to call in, you may be placed on
unauthorized leave without pay and subject to corrective action.
Medical Documentation. Your physician must complete a medical certificate if you are absent as the
result of injury or illness for more than five working days or as otherwise outlined in your union contract
or state personnel regulations. If you fail to provide the required medical documentation, you may be
placed on unauthorized leave, which can lead to loss of pay and disciplinary action. Medical certification
forms should be emailed directly to [email protected]. Any questions must be sent
directly to [email protected].
Additional Use of Sick Leave. You may use sick leave for situations other than your own injury or
illness (a medical certificate or written statement supporting a request may be required):
• Medical, dental or optical examination or treatment when arrangements cannot be made outside working hours.
• Death in your immediate family.
• Illness or injury to a member of your immediate family.
• Funeral for a person other than an immediate family member.
• Birth, adoption or taking custody of a child.
To determine the exact number of days allowed, refer to your union contract.
Extended Illness or Recuperation. If you exhaust your accrued sick leave during a prolonged illness
or injury, you may be permitted to use other accrued time. You must obtain approval from your
immediate supervisor for use of other accrued leave to cover the remainder of the absence. In certain
circumstances, you may be granted an advance of sick leave if you have at least five years of full-time
state service. Consult your union contract for information regarding the sick leave bank or donation of
leave time.
If an employee has no accrued leave time available, a written request for a medical leave without pay
must be submitted to [email protected], and the request must be followed up in
writing upon return to work. Failure to do so will result in charging the absence to Sick Leave Without
Pay.
Illness or Injury While on Vacation. If you become ill or injured while on vacation, you may request
that the recovery time be charged to your sick leave rather than to your vacation leave. A medical certificate or documentation supporting your request will be required.
Vacation Leave
Usage. As a full-time employee, you may begin taking paid vacation leave after six months of
continuous service. Unless otherwise stated in a union contract, a part-time employee may begin taking
paid vacation after completing the equivalent of six months of full-time service (1044 hours). Requests
for vacation leave are subject to the approval of your supervisor, based on the operating needs of the unit
and the seniority provisions of your contract.
Accrual. You accrue vacation leave at the end of each full calendar month of service. Absence without
pay for more than five days (equivalent to 40 hours) in a month results in the loss of accrual for that
month. You accrue vacation leave at the following rate for each completed month of service (prorated,
if part-time):
• 0-5 years of service: 1 day per month (12 days per year).
• 5-20 years: 1-1/4 days per month (15 days per year).
• 20 or more years: 1-2/3 days per month (20 days per year).
As a manager or confidential employee excluded from collective bargaining, you accrue vacation leave at the rate of 1-1/4 days per completed month of service, or 15 days per year. After completing 10 years of service, on January 1 of each subsequent year you will receive the following number of days in addition to the normal accrual (an illustrative example follows the list):
• 11 years of service: 1 additional day
• 12 years: 2 additional days
• 13 years: 3 additional days
• 14 years: 4 additional days
• 15 or more years: 5 additional days
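For illustration only (the service figure is assumed): a manager with 13 years of service would accrue the normal 15 days over the course of the year and would also be credited with 3 additional days on January 1, for a total of 18 days credited that year.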
Balances. Payroll will record your vacation leave balance in hours and minutes. Without agency
permission, you cannot carry more than 10 days of accrued vacation leave from one year to the next if
you are a nonexempt employee. If you are a nonexempt employee, refer to your bargaining unit contract regarding your maximum accrual. If you are an exempt employee or a manager, you may accumulate as many as 120 days of vacation time. When separated from state service, if a permanent
employee, you will receive a lump-sum payment for your vacation leave balance.
Personal Leave
As a full-time employee who has attained permanent status, you are credited with three days of personal
leave to conduct private affairs, including the observance of religious holidays. On January 1 of each
year thereafter, three days of personal leave will be credited to your leave balance. You must request
authorization in advance from your supervisor to use personal leave. Personal leave must be used prior
to the end of the calendar year or it will be forfeited. You are responsible for monitoring your time
charges to ensure that your personal leave is used within the calendar year.
Part-time employees generally are entitled to prorated personal leave; consult your union contract for
the specifics. Payroll will maintain your balance.
Jury Duty
If you are summoned for jury duty, you will not lose your regular salary or benefits. You must notify
your supervisor immediately and supply the jury notice; your supervisor will forward it along with the
reason for your absence to the Payroll Unit. The court will supply you with verification of your
attendance, which is then submitted through your supervisor to Payroll. You must return to work
whenever not actively serving on jury duty. With the exception of travel allowances, you must return
the money received for jury duty to Payroll.
Military Leave
If you are a member of the National Guard or a reserve component of the U.S. armed forces and a
permanent employee, you may apply for leave to attend required training. To verify the leave, you must
submit a copy of your military orders to [email protected] or fax to 860-622-4928.
The state permits as many as three weeks in a calendar year for field training. Paid leave for military
call-ups other than annual training is limited to unscheduled emergencies, subject to the provisions of
your union contract. Notify your supervisor as soon as you become aware of your military leave
schedule.
Leave Without Pay
Leave of Absence Without Pay (LAW)
Depending on the terms of your union contract, you may be granted a LAW without endangering your
status as a state employee. Your benefits, however, may be affected. You will not accrue vacation or sick leave in any month in which you are on a LAW for more than five working days (or the hourly equivalent) without pay, and service credit toward retirement, seniority and longevity may be suspended. If you are on a LAW
for pregnancy, illness, injury, or an FMLA-qualifying reason, the state will continue to pay the same
portion of your health insurance as while you were working. You will, however, be billed directly for
the amount that you previously paid through payroll deduction. If on a LAW for another reason, you
will be billed for the full cost of medical coverage.
If possible, submit your LAW request to [email protected] in advance and in
writing with appropriate documentation. Your manager may grant a LAW for as many as five
consecutive days. A LAW of longer than five days must be authorized by the Benefits and Leaves Pod
before the leave, except in extraordinary situations such as emergency medical leave. You may be
granted a LAW for a variety of purposes on a position-held or not-held basis. Your LAW must be
consistent with the requirements in your union contract or state regulations if you are an exempt
employee. If your position is held, you may resume employment on the expiration of the LAW. You
must be cleared by a physician to return to normal duties if you are on a medical LAW. This needs to
be done before you return to work. If your position is not held, your return to active service depends on
the availability of a position. The agency will consider the reason for your request, your work record
and agency operating needs when deciding whether to grant you a LAW and to hold your position.
Maternity Leave
If pregnant, you must use accrued sick leave to cover time before, during or after your delivery when a
physician certifies you as “unable to perform the requirements of your job.” You must send a Medical
Certificate - P33A to [email protected] to substantiate your disability. When your
disability period ends or you have exhausted your sick leave balance prior to the end of your disability
period, you may request to use accrued vacation and personal leave. When all your paid leave has been
used, you may request a LAW with your position held. Refer to your union contract and the FMLA
Policy for further information.
Medical Leave
You must use accrued sick leave to cover the time which you are unable to work because of illness. If
that period extends beyond five days, you will need to supply a Medical Certificate - P33A to [email protected] to substantiate your use of sick time. When your sick leave
balance is exhausted, you must apply vacation or personal leave to cover your absence unless FMLA
rules dictate otherwise. Your union contract may contain provisions for advance of sick leave, a sick
leave bank, and donation of leave time in cases of prolonged illness. You may also request a leave of
absence without pay. Details on the requirements and provisions of such leaves are in your union
contract and the FMLA policy.
Family Leave
You may request a LAW for the birth or adoption of a child; the serious illness or health condition of a
child, spouse or parent; your own serious health condition; the placement of a foster child in your care; and certain other conditions.
A medical certificate must be submitted by email to
[email protected] to substantiate a request for leave under the Family and Medical
Leave Act (FMLA).
You must request forms by sending an email to
[email protected].
SALARY
Payment
Your job classification determines your salary grade. Classifications are assigned to a salary group based
on the amount and type of required experience and training, technical complexity, difficulty and level of
responsibility. The state establishes a number of steps for salary groups other than managerial and
confidential classes. As a new employee, you will generally start at the salary range minimum for your
job classification.
Payday
The state issues salary payments bi-weekly through a checkless system called e-pay. You will receive
payment for the work you performed during the previous two weeks. The delay allows for processing.
If you are a new employee, you should receive your first salary payment four weeks after your first
workday. If you separate from state service, you will receive your last salary payment two weeks
following the end of the last pay period worked. Earnings, itemized deductions and leave accruals are
viewable online. Questions should be directed to Payroll.
Annual Increments
Annual increments are based on the terms of your union contract. You may be raised to the next higher
step in a salary group on your anniversary date. Consult your union contract for details. If an appointed
official or manager, you may be awarded an increase by the governor, usually effective on January 1.
The amount of the increase will be based on your goal attainment and performance under PARS, the
Performance Assessment and Recognition System for managers.
Collective Bargaining & Cost-of-Living Increases
If you are a union member, your increase will result from the collective bargaining process. An increase
generally will be calculated as an across-the-board percentage within a negotiated salary structure and
payable in July. If you are an appointed official or a manager, the governor may award you a cost-ofliving increase, usually a percentage of your annual salary, also payable in July. When promoted, you
will normally receive a salary increase of at least one full step in the salary group, unless you are placed
at the maximum step. If promoted to a managerial position, you will receive an increase of five percent
or the minimum of the new salary range, whichever is greater.
Longevity Pay
Employees hired on or after July 1, 2011, shall not be entitled to a longevity payment; however, any individual hired on or after that date who has military service which would count toward longevity under current rules shall be entitled to longevity if they obtain the requisite service in the future. Employees hired prior to July 1, 2011, are eligible for longevity. For those eligible employees, when you complete the equivalent of 10 years of full-time state service (generally continuous), you will receive a longevity payment. The amount of the longevity payment increases when you complete 15, 20, and 25 years of service. Longevity schedules appear in your union contract and other pay plans. To qualify, you must attain the required years of service by April 1 or October 1. Longevity payments are also made in these months. Employees not included in any collective bargaining unit are no longer eligible for longevity payments.
DEDUCTIONS
Federal Income Tax & Social Security Tax
Federal income and Social Security taxes will be deducted from your paycheck in accordance with
federal law.
Connecticut Income Tax
State income tax will be deducted from your paycheck in accordance with state law.
Health Insurance
Health insurance coverage for eligible employees who choose to enroll in the state’s health benefit plan
will be effective the first of the month immediately following the employee’s hire date or date of
eligibility. For example, if you were hired on November 9, you must submit your application within
thirty days; your effective date of coverage would be December 1.
You may extend health and dental coverage to cover your spouse, dependent children under age 26,
and/or disabled children over age 26. Please contact Payroll for enrollment eligibility. Refer to the
Office of State Comptroller’s website for a summary of health insurance options and rates.
You must remain with your insurance carrier until the next open enrollment period, the one time a year
when you can change carriers. You may add a dependent newborn or spouse within one month of the
birth or marriage (please note if adding a new spouse, a marriage certificate is required); other dependent
changes generally are restricted to the open enrollment period. If your spouse's insurance was terminated
through his/her employer, you may be eligible to add your spouse as a special exception. A letter from the
employer stating insurance has been cancelled will be required. All additions, deletions, or other changes
must be processed through the Payroll Unit.
You must provide documentation of each dependent’s eligibility status at the time of enrollment. It is
your responsibility to notify the Payroll Unit when any dependent is no longer eligible for coverage.
Group Life Insurance
You may purchase term life insurance at group rates. The state pays a portion of this coverage. You
may authorize payroll deductions for this insurance after six months of employment. If you waive
coverage and later decide to enroll, you must apply with medical evidence of insurability and wait for
approval. The amount of life insurance coverage is based on your annual salary and is automatically
adjusted on April 1 and October 1 as your salary increases. Contact the Payroll Unit to obtain forms or
arrange for beneficiary changes. You may visit the Office of State Comptroller’s website
(https://carecompass.ct.gov/supplementalbenefits/) for more information.
Supplemental Benefits
The state offers various supplemental benefits to qualified employees and retirees, which are designed
to complement the benefits provided by the state. These benefits are on a voluntary basis and are paid
entirely by the employee through the convenience of payroll deduction. Available supplemental benefits
are listed on the OSC website Supplemental Benefits - Care Compass (ct.gov). Contact the authorized
vendors for information and assistance with the enrollment process.
Direct Deposit
You may deposit your paycheck in a checking or savings account in a financial institution that is a
member of the automated clearinghouse. Your funds will be electronically transmitted and available to
you after 9:00 a.m. on the date of the check. You must complete an authorization form to adjust or
cancel direct deposit. Authorization forms can be obtained from Payroll.
Deferred Compensation
Permanent employees who work more than 20 hours a week are eligible for the state’s deferred
compensation plan. Through payroll deduction, you may set aside a portion of your taxable wages (prior
to tax deferrals). The minimum contribution is $20 per pay period. Obtain details by contacting the
plan administrator.
State Employees Campaign
Through the state employee campaign, you may contribute to your choice of a range of service
organizations via payroll deduction.
Union Dues
As a member of a collective bargaining unit, you may elect to join the union and have union dues
deducted from your check. Your union determines the amount by using a set-rate or sliding-scale
formula based on the amount of your salary.
Credit Unions
As an agency employee, you may join the CT Labor Department Federal Credit Union, 200 Folly Brook
Blvd., Wethersfield, CT 06109 (telephone 860-263-6500).
As a State of Connecticut employee, you may also join the CT State Employees Credit Union. Offices
are as follow:
84 Wadsworth Street
Hartford, CT 06106
860-522-5388
1244 Storrs Road
Storrs, CT 06268
860-429-9306
2434 Berlin Turnpike
Newington, CT 06111
860-667-7668
401 West Thames Street
Norwich, CT 06360
860-889-7378
Southbury Training School
Southbury, CT 06488
203-267-7610
1666 Litchfield Turnpike
Woodbridge, CT 06525
203-397-2949
Silver & Holmes Street
Middletown, CT 06457
860-347-0479
Retirement Tiers
The state and collective bargaining units negotiate the pension agreement. The retirement system
includes five plans: Tier I, II, IIA, III and IV. For details, contact the Office of the State Comptroller at
[email protected] or consult the retirement booklet for the plan in which you are a member. Online copies
are available at the OSC website Retiree Resources (ct.gov).
Tier I. Usually, you are a member of this retirement plan if you were hired on or before July 1, 1984 and
contribute by payroll deduction to your pension. You may retire at age 55 with 25 years of service, or
at age 65 with 10 years of service, or retire early at age 55 with 10 years of service – at a reduced rate.
This tier is divided into three plans. Members of Plans A and C contribute five percent of salary toward
retirement. Members of Plan A have chosen not to participate in the Social Security plan; Plan C
members pay Social Security taxes and are eligible for Social Security benefits. Plan B members
contribute two percent of salary toward retirement until they reach the Social Security maximum, and
five percent of salary above the maximum; they will receive reduced pensions when Social Security
payments begin. You also may purchase periods of service for which you have not made contributions:
war service, prior state service, and leaves of absence for medical reasons.
Tier II. If you were hired into state service from July 2, 1984 to June 30, 1997, you are automatically
covered under this noncontributing plan. If you were employed by the state on or before July 1, 1984,
and were not a member of any other state retirement plan, the Tier II plan also covers you. You
contribute two percent of your salary towards retirement. You are eligible for normal retirement benefits
after you attain: (1) age 60 with at least 25 years of vesting service; (2) age 62 with at least 10, but less
than 25 years of vesting service; or (3) age 62 with at least five years of actual state service. If you have
at least 10 years of service, you can receive retirement benefits – at a reduced rate – if you retire on the
first day of any month following your 55th birthday. Retirements on or after July 1, 2022 are subject to
the age and years of service specified in the SEBAC 2011 agreement.
Tier IIA. If you entered state service from July 1, 1997 to June 30, 2011, you are covered under this
plan as of the date of your employment. You contribute two percent of your salary towards retirement
and have the same options and benefits as a Tier II employee. If you are not eligible for any retirement
benefits when you leave state service, you may withdraw your retirement contributions. You also may
purchase periods of service for which you have not made contributions: war service and leaves of
absence for medical reasons.
Tier III. This plan covers employees hired from July 1, 2011, through July 30, 2017. As a Tier III
member, you contribute two percent of your total annual salary. Your normal retirement date is the first
of any month on or after you reach age 63 if you have at least 25 years of service, or age 65 with at least
10, but less than 25 years of service. If you have 10 years of vesting service, you can receive early
retirement benefits on the first of any month following your 58th birthday. If you are not eligible for any
retirement benefits when you leave state service, you may withdraw your retirement contributions.
Tier IV. This plan covers employees hired on or after July 31, 2017. The Tier IV retirement plan
provides elements of both a defined benefit and defined contribution plan. Defined Benefits –
Participants that satisfy the minimum eligibility criteria will qualify for a pre-defined monthly retirement
income for life, with the amount being determined by years of service, retirement age and Final Average
Earnings. You contribute 7% of your annual salary (this rate is for fiscal year July 2023 through June
2024). Defined Contribution – You contribute 1% to a defined contribution plan with a 1% employer
match. This plan also has a risk sharing component wherein for any given year the employee
contribution can be up to 2% higher depending on the plan’s performance for the previous year. This
contribution will be computed by the plan’s actuaries. (You may also contribute to a 457 plan). For
additional information, please see the State Comptroller's Retirement Resources website. Please note:
if you are a former state employee who contributed to a different state retirement plan, please contact
Payroll at 860-263-6195 or [email protected] to see if you qualify to be placed into a different retirement
plan.
Separation
Resignation
The personnel regulation on resignation reads: “An employee in the classified service who wishes to
voluntarily separate from state service in good standing shall give the appointing authority at least two
working weeks written notice of resignation, except that the appointing authority may require as much
notification as four weeks if the employee occupies a professional or supervisory position.”
If you resign, your written notice must include your last day of work and be submitted to your supervisor
at least two weeks before you leave. You will receive a lump-sum payment for unused vacation time if
you are a permanent employee. You may arrange to continue your health insurance benefits at the
COBRA rate for a specific period of time. Contact Payroll for details on the length of coverage and
payment amount. If you are not eligible for any retirement benefits when you leave state service, you
may withdraw your retirement contributions. If you do not return to state service within five years and
have not withdrawn your contributions, the Retirement Division will send you a refund application.
After you complete the form and return it, you will receive your contributions plus interest. If the
Retirement Division cannot locate you within 10 years after your employment ends, your contributions
will become part of the retirement fund.
If you submit your resignation less than two weeks before leaving, your separation may be regarded as
not in good standing and may affect your re-employment rights. An unauthorized absence of five or
more working days also will be considered as a resignation not in good standing. You will be notified
if your resignation is considered as not in good standing and you may file an appeal with the
Commissioner of the Department of Administrative Services.
Layoff
The state defines a layoff as an involuntary, non-disciplinary separation from state service resulting from
a lack of work, program cutback or other economic necessity. Consult your union contract for
particulars. If you are an exempt employee, consult Sec. 5-241 of the Connecticut General Statutes.
Reemployment Rights
In an effort to deliver services in a contemporary and cost effective fashion, the State of Connecticut
uses a module called Freenames through the Online Employment Center (JobAps) as a platform for
processing the following:
• Mandatory rights for eligible individuals (reemployment/SEBAC/other mandatory rights)
• Statewide Transfer requests (non-mandatory transfers)
• Rescind of Resignation or Retirement requests
This section applies to:
Current or former State Employees who have been affected by the following:
• Layoffs
• Noticed for layoff
• Accepted a demotion in lieu of layoff
• Notified of eligibility for mandatory rights
• Recently failed a working test period and has permanent classified status
• Exercising rights to return to the classified service from the unclassified service
• Recently separated NP-2 employee with Article 39 Rights
• Current employees who request to place their names on a Statewide Transfer list
• Former employees who request to rescind their resignation in good standing or voluntary retirement.
If you retire from state service, you are eligible for temporary employment in any class in which you
had permanent status. As a re-employed retiree, you may work as many as 120 days per calendar year
(based on 40 hours per week prior to retirement) without adversely affecting your pension. Such
appointments are totally at the discretion of the agency.
Rescind of Resignation or Retirement
If you have permanent status and resign in good standing, you may, within one year of the date of your
separation, request to rescind your resignation by completing the Rescind Resignation request via the
JobAps Freenames application. This will enable you to be considered for any classes in which you had
permanent status. Reinstatement is strictly voluntary on the part of the Agency and may occur at any
time up to two years from the date of your separation.
Former employees are solely responsible for conducting their own search for reinstatement by
requesting rescind privileges via the JobAps Freenames application.
Use the rescind of resignation or retirement option to request to rescind a resignation in good standing,
or a retirement from state service in accordance with DAS General Letter 177.
Note: There are no reemployment rights associated with a rescind of resignation. The State of
Connecticut is not required to rehire individuals who rescind resignation. Rather, certain privileges may
be granted depending on the job class and effective date of rehire.
Requirements
A former State employee must meet the following conditions:
• Attained permanent status as a State employee
• Separated from state service in good standing from a position in the Classified service or a
bargaining unit position in the Unclassified service
• You must know the job class you resigned or retired from. To locate this information, contact
your former Human Resources Representative or refer to your last paycheck as an active
employee.
• You must include each job code matching your last held title, including different hourly
equivalents. For example:
7603EU = Information Technology Analyst 1 (35 hours)
7603FD = Information Technology Analyst 1 (40 hours)
DAS will conduct a review and approve or deny all rescind requests for any or all job classes identified.
Applicants will be notified of the status of their requests via email. Please be sure to keep your contact
information updated and check your email and spam folders often as most communication will occur
via email.
For detailed instructions to request to rescind a resignation in good standing or retirement, refer
to Instructions Rescind Resignation or Retirement.
Exit Interview
Below you will find the link and QR code to access a confidential exit interview survey. Thank you for
taking the time to engage in the exit interview process. This survey will only take approximately three
minutes to complete. The information collected will help us evaluate factors like pay, benefits, work
environment, and your overall work experience. All your answers are confidential, so please be candid
with your responses. The information collected will help us to identify any potential areas where we can
implement new strategies to increase the satisfaction of our workforce. Thank you again for your time
and attention.
Link to survey: Confidential Exit Survey State of Connecticut - DAS (office.com)
QR code:
Retirement
Retirement Types
State employees are members of one of several retirement programs. Once an employee has completed
the actual or vesting service required by the retirement system, he/she is eligible for a pension.
Retirements are effective on the first of the month following the last working day of the previous month.
For retirement purposes, an employee who is on prolonged sick leave will retire the first of the month
following the last working day that sick leave was used in the previous month (a medical certificate is
required) and may qualify for a disability retirement. Types of retirement include Normal, Early,
Hazardous Duty and Disability. If you plan to retire, you must send your Notice of Intent to Retire and
Retirement Information Form via fax to 860-622-4928 or via email to
[email protected]. Please refer to the Plan Summary, which can be found on the
Office of the State Comptroller's website at Retiree Resources (ct.gov).
Regardless of the type of separation from service, on the last day of work the terminating employee
must return State property to his or her supervisor.
Pension Payment Options
Option A - 50% Spouse: This option will pay you a reduced benefit for your lifetime in exchange for
the protection that, should you pre-decease your spouse, the state will continue to pay 50% of your
reduced benefit for your spouse's lifetime.
Option B - 50% or 100% Contingent Annuitant: This option provides you a reduced monthly benefit
for your life and allows you to guarantee lifetime payments after your death to a selected beneficiary.
After your death, a percentage of your reduced benefit, either 50% or 100%, whichever you choose, will
continue for your beneficiary’s life.
Option C - 10 Year or 20 Year Period Certain: This option provides you a reduced monthly benefit
for your lifetime in exchange for the guarantee that monthly benefits will be paid for at least 10 or 20
years from your retirement date (whichever you choose).
Option D - Straight Life Annuity: This option pays you the maximum monthly benefit for your
lifetime only. All benefits will end upon your death, including state-sponsored health insurance for any
surviving eligible dependents.
Insurance Benefits
You must meet age and minimum service requirements to be eligible for retiree health coverage. Service
requirements vary. For more about eligibility for retiree health benefits, contact the Retiree Health
Insurance Unit at 860-702-3533.
Regardless of the retirement option you choose, you will receive a monthly pension for the rest of your
life, and, if you qualify for health insurance benefits, coverage will extend to your eligible dependents.
Once you or your dependents become eligible for Medicare, Medicare becomes the primary medical plan
and the state plan is supplementary.
If you retire with at least 25 years of service and have state-sponsored life insurance, the state will pay
for 50 percent of the amount of coverage you had while employed (at least $7,500). If you retire with less than
25 years of service, the state will pay a prorated amount. The Group Life Insurance Section of the
Retirement Division will contact you following your retirement concerning conversion options.
Disability retirement and pre-retirement death benefits are a part of your pension agreement. Pensions
also are subject to cost-of-living increases as outlined in the agreement.
For further information regarding retirement benefits call or email:
Office of the State Comptroller Retirement Division
165 Capitol Avenue
Hartford, CT 06106
Telephone: (860) 702-3490
Email: [email protected]
TRAINING & DEVELOPMENT
In-Service Training
You may apply for Department of Administrative Services in-service training courses. Courses should
be relevant to your position or career mobility, or to your unit’s operational needs. They are generally
held during regular work hours in the spring and fall. Supervisor approval is required. For information,
contact Employee and Organizational Development.
Management Development Courses
A calendar of courses focusing on leadership, supervisory and management development, strategic
planning, customer service skills and total quality management techniques is distributed twice a year.
Contact Employee and Organizational Development for particulars.
Tuition Reimbursement
You may seek tuition reimbursement from the state for courses taken during non-working hours at
colleges, universities, technical schools or other accredited educational institutions. You do not need
supervisory approval. Eligibility and funding provisions are outlined in your union contract if you are a
bargaining unit employee.
As a non-exempt employee, you may be reimbursed for a non-credited course through your union.
Convert course hours to credits. For example, 6-14 hours equal one credit for tuition reimbursement;
15-29 hours, two credits; and 30-44 hours, three credits.
As a manager, you are eligible for tuition reimbursement from the State Management Advisory Council
or agency funds.
As a non-managerial confidential employee, you may apply for reimbursement in accordance with the
union contract that would have included your job classification had your class not been excluded. For a
fall semester class, you must document by Feb. 1 that you paid for a course and passed it, and by June 1
for a spring semester class.
Forms and assistance are available through Employee and Organizational Development. You must
submit your application to that unit at Central Office, 200 Folly Brook Blvd., Wethersfield, CT 06109-1114, at least two weeks before the start of a class.
Conferences, Workshops & Seminars
Your union contract may provide for payment of costs associated with conferences, workshops or
seminars, such as registration fees, travel expenses and meals. You must receive supervisory approval
before processing a payment request. Consult your union contract for details.
EMPLOYMENT POLICIES
(Ctrl + Click to follow links below)
Acceptable Use of State Systems Policy - Statewide (2019)
ADA Reasonable Accommodation Policy
Affirmative Action Policy Statement – DOL (2023)
AIDS Policy – DOL (7/16/2012)
Background Check Policy and Procedures – DOL (10/31/2022)
Disposition of Public Records Policy – DOL (11/28/2011)
Discrimination and Illegal Harassment Prevention Policy – DOL (April 2023)
Drug Free Workplace State Policy – DOL (7/16/2012)
Employee Conduct Policy – DOL (8/3/2018)
Employee Dependability Policy – DOL (7/16/2012)
Employee Discipline Policy – DOL (7/16/2012)
Ethical Conduct Policy – DOL (8/2013)
Family Violence Leave Policy – Statewide GL 34 (1/2022)
Federal Family & Medical Leave Act – DOL (7/16/2012)
Health and Safety Policy – DOL (7/16/2012)
Internal Discrimination Complaint Procedure – DOL (4/18/2023)
Internal Security Standards - DOL
Office Automation Policy, Standards and Guidelines – DOL (7/16/2012)
Personal Wireless Device Policy (Rev. 9/9/2020)
Phone Use Policy (Rev. 4/23/2023)
Policy for DOL Facility Occupancy – DOL (7/9/2020)
Professional Image Policy – DOL (3/1/2023)
Prohibition of Weapons in DOL Worksites Policy – DOL (8/10/16)
Public Officials and State Employees Guide to the Code of Ethics - Statewide 07/16/2012
Software Anti-Piracy Policy – DOL (7/16/2012)
Vehicle-Use-for-State-Business-Policy--DAS-General-Letter-115--April-1-2012.pdf (ct.gov)
Violence in the Workplace Prevention – DOL (4/2012)
Workers Compensation Rights Responsibilities and Claims (ct.gov)
Workplace Incident Report and Footprints Instructions – DOL (2015)
**Please refer to online Employee Handbook for link activation.
EMPLOYEE HANDBOOK
Table of Contents
Welcome................................................................................................................................................... 5
About the Agency .................................................................................................................................... 6
Mission Statement ................................................................................................................................... 6
Supersedence ........................................................................................................................................... 6
General Highlights .................................................................................................................................. 7
Access Card ............................................................................................................................................. 7
Affirmative Action/Equal Employment Opportunity Employer ....................................................... 7
Americans with Disabilities Act ............................................................................................................ 7
Appearance & Dress Code ..................................................................................................................... 7
Building Security .................................................................................................................................... 7
Code of Ethics ......................................................................................................................................... 7
Collective Bargaining ............................................................................................................................. 7
Email & Internet Use.............................................................................................................................. 8
Employee Assistance Program .............................................................................................................. 8
Employee Background Check ............................................................................................................... 8
Employment Applications ...................................................................................................................... 8
Equal Employment Opportunity........................................................................................................... 8
Immigration Law Compliance............................................................................................................... 8
On-the-Job Accident/Illness ................................................................................................................... 9
Photo Identification ................................................................................................................................ 9
Political Activity ...................................................................................................................................... 9
Rideshare ................................................................................................................................................. 9
Safety ........................................................................................................................................................ 9
Sexual Harassment ................................................................................................................................. 9
Smoking ................................................................................................................................................. 10
Standards of Conduct ........................................................................................................................... 10
Telephones - Cellular Telephones ....................................................................................................... 10
Travel ..................................................................................................................................................... 10
Uniformed Services Employment & Reemployment......................................................................... 10
Violence in the Workplace ................................................................................................................... 10
Visitors ................................................................................................................................................... 11
Weather & Emergency Closings ......................................................................................................... 11
Collective Bargaining ........................................................................................................................... 12
Bargaining Unit Representation .......................................................................................................... 12
Union Contracts .................................................................................................................................... 12
Grievance Procedure ............................................................................................................................ 12
Appointment and Promotion ............................................................................................................... 14
Merit System ......................................................................................................................................... 14
Job Classification .................................................................................................................................. 14
Classified & Unclassified Positions ..................................................................................................... 14
Competitive & Non-Competitive Positions ........................................................................................ 14
Scheduled & Continuous Recruitment Job Announcements ........................................................... 14
Job Announcements.............................................................................................................................. 14
Employment Opportunities ................................................................................................................. 15
Application Accommodations for People with Disabilities ............................................................... 15
Rejection from State Application ........................................................................................................ 15
Appointment Types .............................................................................................................................. 15
Working Test Period ............................................................................................................................ 16
Service Ratings ...................................................................................................................................... 17
Promotion & Reclassification .............................................................................................................. 17
Temporary Service in a Higher Class ................................................................................................. 17
Transfers ................................................................................................................................................ 18
Dual Employment ................................................................................................................................. 18
Personnel Records ................................................................................................................................ 19
Personnel Files ...................................................................................................................................... 19
Change of Personal Data ...................................................................................................................... 19
Working Hours ..................................................................................................................................... 19
Meal & Break Periods .......................................................................................................................... 20
Overtime & Compensatory Time ........................................................................................................ 20
Shift Assignments.................................................................................................................................. 20
Attendance ............................................................................................................................................. 20
Paid Leave Time ................................................................................................................................... 21
Holidays ................................................................................................................................................. 21
Sick Leave .............................................................................................................................................. 21
Vacation Leave ...................................................................................................................................... 22
Personal Leave ...................................................................................................................................... 23
Jury Duty ............................................................................................................................................... 23
Military Leave ....................................................................................................................................... 24
Leave Without Pay ............................................................................................................................... 25
Leave of Absence Without Pay (LAW) ............................................................................................... 25
Maternity Leave .................................................................................................................................... 25
Medical Leave ....................................................................................................................................... 25
Family Leave ......................................................................................................................................... 26
Salary ..................................................................................................................................................... 27
Payment ................................................................................................................................................. 27
Payday .................................................................................................................................................... 27
Annual Increments ............................................................................................................................... 27
Collective Bargaining & Cost-of-Living Increases ............................................................................ 27
Longevity Pay ........................................................................................................................................ 27
Deductions ............................................................................................................................................. 29
Federal Income Tax & Social Security Tax ....................................................................................... 29
Connecticut Income Tax ...................................................................................................................... 29
Health Insurance ................................................................................................................................... 29
Group Life Insurance ........................................................................................................................... 29
Supplemental Benefits .......................................................................................................................... 29
Direct Deposit ........................................................................................................................................ 30
Deferred Compensation ....................................................................................................................... 30
State Employees Campaign ................................................................................................................. 30
Union Dues ............................................................................................................................................ 30
Credit Unions ........................................................................................................................................ 30
Retirement Tiers ................................................................................................................................... 31
Separation .............................................................................................................................. 33
Resignation ............................................................................................................................................ 33
Layoff ..................................................................................................................................................... 33
Reemployment Rights .......................................................................................................................... 33
Rescind of Resignation or Retirement ................................................................................ 34
Exit Interview ........................................................................................................................ 35
Retirement ............................................................................................................................. 36
Retirement Types .................................................................................................................................. 36
Pension Payment Options .................................................................................................................... 36
Insurance Benefits ................................................................................................................................ 36
Training and Development .................................................................................................................. 38
In-Service Training ............................................................................................................................... 38
Management Development Courses .................................................................................................... 38
Tuition Reimbursement ....................................................................................................................... 38
Conferences, Workshops & Seminars ................................................................................................ 38
EMPLOYMENT POLICIES ............................................................................................................... 39
Welcome
Whether you have just joined the agency or have been with us for a while, we are confident that you will
find, or have found, our organization to be a dynamic and rewarding place in which to work. We consider the
employees of the Department of Labor to be our most valuable resource and we look forward to a
productive and successful partnership.
This handbook has been prepared for you to serve as a guide for the employer-employee relationship.
The topics covered in this handbook apply to all employees of the Department of Labor. It is important
to keep the following things in mind about this handbook.
First, it contains general information and guidelines. It is not intended to be comprehensive or to address
all the possible applications of, or exceptions to, the general policies and procedures described. It is not
intended to replace or supersede collective bargaining agreements that may cover many of your terms
and conditions of employment. Employees covered by a collective bargaining agreement will receive a
copy of their contract at orientation. You should read and become familiar with your collective
bargaining agreement, this employee handbook and the agency’s employment policies. If you have any
questions concerning eligibility for a particular benefit, or the applicability of a policy or practice, you
should address your specific questions to your supervisor or contact your HR Generalist for clarification.
Second, neither this handbook nor any other agency document confers any contractual right, either
expressed or implied, to remain in the agency’s employ or guarantee any fixed terms and conditions of
your employment.
Third, the policies, procedures, and benefits described here may be modified or discontinued from time
to time. We will try to inform employees of any changes as they occur but cannot guarantee immediate
advance notice of changes.
Finally, some of the subjects described here are covered in detail elsewhere. The terms of written
insurance policies and/or plan documents are controlling for health, life, retirement and deferred or
reduced income benefits. You should refer to those documents for specific information since this
handbook is only designed as a brief guide and summary of policies and benefits.
We are pleased to have you as a member of the Department of Labor and look forward to a successful
and beneficial association.
About the Agency
The Department of Labor handles far more than unemployment insurance benefits. Helping employers
and jobseekers with their workforce needs is our goal. An overview of the many programs and public
services the agency offers is available on the website (www.ct.gov/dol), which also contains information
ranging from upcoming job fairs to wage and workplace guidelines.
Mission Statement
The Department of Labor is committed to protecting and promoting the interests of Connecticut workers.
In order to accomplish this in an ever-changing environment, we assist workers and employers to become
competitive in the global economy. We take a comprehensive approach to meeting the needs of workers
and employers, and the other agencies that serve them. We ensure the supply of high-quality integrated
services that serve the needs of our customers.
Supersedence
This revised version of the Employee Handbook supersedes all prior versions that have been issued by
the Department of Labor and will be effective April 2023.
General Highlights
Access Card
Central Office and Annex employees are issued an access card to enter the building. Should your card
be lost, stolen or destroyed, contact Facilities Operations so the card can be deactivated and a
replacement issued.
Affirmative Action/Equal Employment Opportunity Employer
The Department of Labor is committed to affirmative action/equal employment that will build on the
strengths of our current workforce and continually enhance the diversity of our organization. The
department opposes all forms of discrimination and has developed a set of anti-discriminatory policies.
Please direct your questions about affirmative action issues to the AA/EEO Manager at Central Office,
200 Folly Brook Boulevard, Wethersfield, CT 06109; telephone (860) 263-6520. To file a complaint,
please click on the link to access the form: Internal Discrimination Complaint
Americans with Disabilities Act
The Department of Labor complies with all relevant and applicable provisions of the Americans with
Disabilities Act (ADA). The agency will not discriminate against any qualified employee or job
applicant with respect to any terms, privileges, or conditions of employment because of a person’s
physical or mental disability. See the Americans with Disabilities Act Reasonable Accommodation
Policy
Appearance & Dress Code
It is the policy of the agency to project a business-like image to clients, visitors and co-workers. In line
with this, you are required to dress appropriately in clothing which is suitable for your job responsibilities
and work environment, meets the requirements established for safety reasons, and complies with the
agency’s dress code requirements. See Professional Image Policy.
Building Security
Each and every employee must follow the building security rules and regulations. Employees are not
allowed on the property after hours without prior authorization from their supervisor.
Code of Ethics
The department’s standards of ethical conduct, which all employees are expected to be familiar with and
observe, are outlined in the Code of Ethics for Public Officials & State Employees and the Ethical
Conduct Policy.
Collective Bargaining
Your assignment to a collective bargaining unit (union) is based on your job classification. As a
bargaining unit member, you will have union dues deducted from your bi-weekly paycheck. You may
elect not to join a union. Your union contract governs salary, benefits and hours of work, and other
terms and conditions of employment. Collective bargaining agreements are negotiated periodically.
Exempt employees are excluded from the collective bargaining process and are not required to pay union
dues.
Email & Internet Use
It is the policy of the agency to provide electronic mail (email) and internet access for work-related
purposes. You are required to adhere to this and related policies to ensure proper, legal and effective
use of these electronic tools and resources. See Acceptable Use of State Systems Policy.
Employee Assistance Program
The Employee Assistance Program (EAP) is designed to offer consultation and counseling services for
employees and their dependents who are experiencing problems which may be impacting their life at
work and/or at home. Some of these problems may include family, marital, alcohol/drugs, emotional
distress, and job-related, legal, or financial difficulties. Participation is voluntary and confidential. EAP
services are provided by Wheeler EAP. To schedule an appointment or obtain more information, call 1-800-252-4555 or 1-800-225-2527, or log on to their website at Wheeler EAP.
Employee Background Check
Prior to making an offer of employment, Human Resources may conduct a job-related background check.
A comprehensive background check may consist of prior employment verification, professional
reference check, education confirmation and fingerprinting.
Employment Applications
We rely upon the accuracy of information contained in an employment application and the accuracy of
other data presented throughout the hiring process and employment. Any misrepresentation, falsification
or material omission of information or data may result in exclusion of the individual from consideration
for employment or, if the person has been hired, termination of employment.
Equal Employment Opportunity
The Department of Labor is an equal employment opportunity employer. Employment decisions are
based on merit and business needs. The Department of Labor does not discriminate on the basis of race,
color, citizenship status, national origin, ancestry, gender, sexual orientation, age, religion, creed,
physical or mental disability, marital status, veterans’ status, political affiliation, or any other factor
protected by law. To file a complaint, please click on the link to access the form: Internal Discrimination
Complaint.
Immigration Law Compliance
All offers of employment are contingent on verification of the candidate’s right to work in the United
States. On the first day of work, every new employee will be asked to provide original documents
verifying his or her right to work and, as required by federal law, to complete and sign an Employment
Eligibility Verification Form I-9.
On-the-Job Accident/Illness
The agency promotes safety in the workplace. The State of Connecticut also has implemented a
Managed Care Program for Workers’ Compensation, administered by Gallagher Bassett Services, Inc.
You must report a work-related accident or illness to your supervisor, who is required to call a 24-hour
hotline (1-800-828-2717) to report your accident or illness and initiate a claim. If your supervisor is
unavailable, you may call or have someone call for you. Your supervisor must also complete the First
Report of Injury (Form WC-207) and submit it to [email protected] or by fax to 959-200-4841,
whether or not you seek treatment or lose time from work. To become eligible for workers’
compensation benefits, you must seek treatment from a network physician or medical facility. Forms
can be obtained at Workers' Compensation Rights, Responsibilities, and Claims--Documents (ct.gov).
In cases of a medical emergency call 911 to seek immediate medical attention. Contact the DAS Workers'
Compensation Division at (860) 713-5002 with any questions regarding access.
Photo Identification
You are required to wear and visibly display a photo identification badge during working hours. If your
identification badge is lost, stolen, or destroyed, or you have transferred to a different unit, you must
request a replacement through Facilities Operations.
Political Activity
As a state employee, state statutes govern your involvement in various political activities such as
campaigning and running for elective office. Also, if you are working on programs financed in whole
or in part by federal funds, you are subject to the provisions of the federal Hatch Act, which is generally
more restrictive than state statute. The purpose of these laws is to avoid a conflict of interest between
your state job and political activities. Information regarding political activity may be found in DAS
General Letter 214D, link to document General Letter 214D – Political Activity. The Ethical Conduct
Policy also addresses these issues, and you are advised to contact the agency's Ethics Liaison regarding
any political activity. See Ethical Conduct Policy.
Rideshare
The department promotes the statewide Rideshare Program, an opportunity to reduce your transportation
expenses to work. Consider using a ride-sharing mode (carpool, vanpool or bus) as an alternative to
driving alone. Ride sharing saves you money, saves energy, and preserves the environment. For information
call 800-972-EASY (800-972-3279) or visit the website at www.rideshare.com.
Safety
The safety and health of employees is our top priority. The agency makes every effort to comply with
all federal and state workplace safety requirements. Each employee is expected to obey safety rules and
exercise caution and common sense in all work activities. Promptly report safety concerns to your
supervisor.
Sexual Harassment
The Department of Labor does not tolerate sexual harassment. Sexual harassment may include
unwelcome sexual advances, requests for sexual favors, or other unwelcome verbal or physical contact
of a sexual nature when such conduct creates an offensive, hostile and intimidating work environment
and prevents an individual from effectively performing the duties of their position. See Sexual
Harassment Policy.
Smoking
Smoking is prohibited throughout agency buildings and offices, including in rest rooms, private offices,
lounges and similar areas. Smoking is permitted only in designated areas outside office buildings and
other work locations. The use of smokeless tobacco and e-cigarettes are also prohibited and subject to
the same restrictions.
Standards of Conduct
The work rules and standards of conduct for employees are important and the agency regards them
seriously. All employees are urged to become familiar with and must follow these rules and standards.
See Employee Conduct Policy.
Telephones - Cellular Telephones
The agency recognizes that occasionally it is necessary for employees to make or receive personal
telephone calls during working hours. You are expected to restrict your personal telephone usage, both
on state-owned phones and personally owned cellular phones, to reasonable, incidental calls that do not
interfere with your work schedule or the performance of your duties. To avoid being disruptive to others
in the workplace, please make certain audible alerts are disabled.
Travel
Your position may require travel to conduct state business. If you are required to travel for work, you
may obtain a state-owned vehicle from a central carpool if you have a valid driver's license. Use of your
personal vehicle in the performance of Agency duties is allowable only when a State-owned vehicle is
not reasonably available; in that case, you may request mileage reimbursement. You must present proof
of automobile insurance with the minimum coverage requirements. Contact your supervisor or Business
Management if you have any questions.
Uniformed Services Employment & Reemployment
As an equal opportunity employer, the Department of Labor is committed to providing employment and
reemployment services and support as set forth in the Uniformed Services Employment and
Reemployment Rights Act of 1994 (USERRA).
Violence in the Workplace
The Department of Labor has a policy prohibiting workplace violence. Consistent with this policy, acts
or threats of physical violence, including intimidation, harassment and/or coercion, which involve or
affect the organization and its employees will not be tolerated. See Violence in the Workplace
Prevention Policy.
Visitors
To provide for safety and security, only authorized visitors are allowed in the workplace. All visitors
must enter through the main reception area, sign in and sign out at the front desk, and receive a visitor
identification badge to wear while on the premises. Authorized visitors will be escorted to their destination
and must be accompanied by an employee at all times.
Weather & Emergency Closings
At times, emergencies such as severe weather or power failures can disrupt business operations.
Everbridge is a system that the state uses to notify enrolled individuals of safety and weather
concerns. You can choose the methods by which you want to be notified. Sign-up is free. Any personal
information provided (such as a cell number) will be used only for important employee notification
purposes as directed by DAS. Everbridge will never give or sell contact or location information to any
vendor or other organization.
The Department of Emergency Service & Public Protection website is the official source of information
for state employees. Use this page to find any official announcements about closures or delayed
openings that have been declared by the Governor.
The Everbridge system can send alerts to your work phone and email as well as your home phone, cell phone,
and home email.
The Statewide CT Alert system can also keep you informed of state emergencies and send you emails
and text alerts.
FEMA’s Ready.gov preparedness site has information on how to keep safe during the winter.
Collective Bargaining
Bargaining Unit Representation
Labor unions and management at times negotiate collective bargaining agreements (union contracts).
The contracts govern such areas as salary, benefits, hours of work, and the terms and conditions of
employment. Most state job classifications have been assigned to particular bargaining units (unions)
and state employees have voted to have unions represent them in the negotiation process.
If you are a nonexempt employee, you have been assigned to a bargaining unit based on your job
classification and will be represented by that specific union. If you are an exempt employee, you have
been excluded from the collective bargaining process. The terms and conditions of your employment
will be governed by state statutes, rules and regulations.
Union Contracts
Union contracts, established through the formal negotiation process, outline the terms and conditions of
your employment. You should familiarize yourself with your contract. Benefits and provisions vary
between bargaining units. Contract language has been crafted to avoid disputes and eliminate
misunderstandings. Contract provisions, however, may be open to interpretation and subject to the
grievance and arbitration process. Direct your questions about your union contract to your supervisor,
union representative or Human Resources Generalist.
Grievance Procedure
Your problems or complaints should be resolved quickly and fairly. First, discuss the issue with your
supervisor, who may help you find a solution. If your supervisor or another employee in the chain of
command cannot resolve your problem or complaint, or if you feel that you have been treated unjustly,
contact your union steward or Agency Labor Relations Specialist. If an issue cannot be resolved
informally, you may follow the grievance procedure outlined in your union contract. This procedure
helps resolve disputes concerning the interpretation and application of a contract. You should, however,
make every effort to resolve an issue before filing a grievance. Though specific procedures may vary,
your union contract establishes time limits for initiating grievances and obtaining responses.
The first steps of the grievance process are informal to encourage quick resolution. If an issue still cannot
be resolved, more formal meetings are conducted until the grievance reaches the highest level of the
process. Most grievance procedures permit arbitration when an issue cannot be resolved at the highest
level. An arbitrator, an impartial party chosen by the union and management, will hear both sides of an
issue and render a binding decision.
A union normally requests arbitration, but you as an employee may also request it in certain
circumstances. Arbitration is permitted only if negotiated as a step in the grievance procedure.
You or a group of employees may present a grievance to management for resolution without your union’s
participation. However, the resolution must be consistent with your union contract and your union must
be given the opportunity to attend all meetings.
If you are an exempt classified employee, you may appeal certain actions through the grievance
procedure as outlined in Sec. 5-202 of the Connecticut General Statutes.
Appointment and Promotion
Merit System
The appointment and promotion of state employees is based on the merit principles in the State Personnel
Act. As with other federal, state and municipal merit systems, this system was established to minimize
the influence of electoral politics on the employment and retention of state employees. The system
strives to place the best qualified people in state service and to ensure that they are fairly treated in the
appointment and promotion process. The merit system is not subject to collective bargaining.
Job Classification
The state, as an employer of thousands of people, must systematically describe and group jobs to ensure
consistent and fair treatment when assigning, compensating and promoting employees. Consequently,
it has established a classification plan for all jobs in the executive branch of state service. Individual
positions are grouped into job classes, with each class consisting of positions with similar duties,
responsibilities and required qualifications. Your job classification is the foundation for the employment
process.
Classified & Unclassified Positions
Most positions in the executive branch of state government are classified. Unclassified positions may
be exempt from job announcements. The State Personnel Act lists a number of unclassified categories:
agency heads, members of boards and commissions, officers appointed by the governor, deputies and
executive assistants to the head of departments, executive secretaries, employees in the Senior Executive
Service and professional specialists.
Competitive & Non-Competitive Positions
Most classified positions are competitive and require an application. The type of experience required
depends on the job classification. Applicants must meet minimum general experience and training requirements to be eligible for appointment. However, if a position requires a professional license or degree, there may be no additional requirements beyond possession of the professional license or degree.
Scheduled & Continuous Recruitment Job Announcements
Most state job opportunities are announced to the general public with a specific closing date. If you
apply for a job opening, you will be notified if you are selected for an interview by the hiring agency.
When the state considers continuous recruiting necessary, it may postpone the closing date for filing
applications until it receives a suitable number of candidates. A job posting will indicate when recruiting
is continuous and that applications may be filed until further notice.
Job Announcements
To meet merit system objectives, the state has developed competitive job classifications to fill many of
its positions. They are not used to fill unclassified positions or those in classes designated as noncompetitive. State job announcements fall into the following categories:
Open to the Public. If you meet the minimum experience and training qualifications for a position, you
may participate in this type of recruitment. Open-competitive job announcements are administered
periodically, usually when a state agency is recruiting for a vacant position.
Statewide & Agency Promotion. If you are a state employee who meets the minimum experience and
training qualifications for a position and has completed six months continuous service in a state agency,
you may participate in a statewide recruitment. Agency promotional announcements will have the
additional requirement that you must be a current agency employee.
Employment Opportunities
Agency job announcements are posted on the DAS Online Employment Center. You should check
regularly for the most up to date information.
To apply for employment, you must complete a Master Application on the DAS Website. Check the
state employment pages on the Department of Administrative Services website (Job Openings - Department of Administrative Services (jobapscloud.com)) for information about completing the
application form, job opportunities, and to sign up for e-mail notification of current job openings.
Application Accommodations for People with Disabilities
The state may conduct recruitments in various ways. If you need special accommodations for a particular
recruitment, you or someone on your behalf should immediately notify the DAS at (860) 713-7463. You
must supply the application title and job number, and a description of your special needs and
documentation of the disability.
Rejection from State Application
Your application for a state job opening may be rejected if (1) your application was received after the
closing date, (2) you did not meet the minimum requirements, (3) your years of experience did not match
the requirements, (4) specific information was missing from your application, (5) you failed to meet the
special requirements for the position, or (6) your years of experience did not match the special
requirements.
Appointment Types
Durational. An employee hired for a specific term, for a reason not provided above, including a grant
or specially funded program, not to exceed one year. A durational employee shall become permanent
after six months, or the length of the working test period, whichever is longer.
Emergency. The state may appoint you to an emergency position to meet short-term agency needs. The
appointment may extend for as long as two months but may not be renewed in a fiscal year.
Intermittent. Intermittent employment is also work on an "as needed" basis. The agency may use
intermittent interviewers to supplement permanent staff in times of high unemployment. They are paid
an hourly rate for time worked and may receive benefits. They are eligible to apply for agency
promotional postings following the completion of 1044 hours of intermittent service.
Permanent. The state may appoint you to a permanent competitive position from a certification list.
You must successfully complete the working test period to gain permanent status.
Provisional. The state may provisionally appoint you to a position that must be filled immediately if no
active certification list exists, or an insufficient number of candidates are listed. The appointment may
extend for as long as six months or until a job announcement for the position has been held and a
certification list promulgated. You may not receive more than one provisional appointment in a fiscal
year or serve more than six months as a provisional appointee. Your job performance while a provisional
must be satisfactory. To receive a permanent appointment, you must be appointed from a competitive
process for the position. If you are not appointed from a competitive process and do not have a
permanent position to which you may return, you must be separated from state service. If the competitive
process is not completed for a position within six months, an additional temporary or emergency
appointment may be authorized.
Seasonal. Seasonal employment for a position established for a specific period, usually during summer
months. Individuals employed are paid an hourly rate and are not entitled to any fringe benefits.
Temporary. Position filled for a short term, seasonal, or an emergency situation, including to cover for
a permanent position when the incumbent is on workers’ compensation or other extended leave, not to
exceed 6 months. May be extended up to one year. If a temporary employee is retained greater than 12
months, said employee shall be considered durational.
Working Test Period
The working test period, or probationary period, for a state employee is an extension of the state
recruitment process. You must serve this period to gain permanent status following initial appointment
or promotion. Your initial test period is generally six months, depending on the applicable contract or
state regulation. Your promotional test period is generally four to six months, again depending on the
applicable contract or regulation. Exceptions may occur in the length of the trial period for trainee
positions. Questions about your working test period may be directed to your supervisor or Human
Resources Generalist.
During an initial working test period, you are considered a probationary employee and will work closely
with supervisors and colleagues to learn your duties. This period also gives your supervisor the
opportunity to evaluate your response to training and job requirements. If you demonstrate acceptable
performance during your initial test period, you will be given a satisfactory service rating and gain
permanent status as a state employee.
Your working test period may be extended in certain circumstances. If you do not meet acceptable
performance standards during the initial working test period, you will be separated from state service.
You may not appeal a dismissal during your initial test period through the contractual grievance
procedure, but you may request an administrative review. If you fail to meet acceptable performance
standards during a promotional working test period, you will revert to your previous classification.
Service Ratings
You will receive a service rating for your initial working test period or promotional test period, and at
least three months before your annual increase date. Depending on your union contract or state statutes,
you may receive a service rating at any time, particularly when your job performance has changed
significantly.
Service ratings record your progress and performance as training and job experience increase. The state
recognizes satisfactory performance by awarding annual salary increases (as negotiated) until reaching
the maximum step in a salary group. For employees at the maximum step, some bargaining units award
a lump sum payment in lieu of an annual increment. A “less than good” rating may prevent you from
receiving an increase. An “unsatisfactory” during the working test period signifies failure. After
attaining permanent status, two successive “unsatisfactory” ratings may result in your dismissal.
Managers are evaluated in accordance with the provisions of the Performance Assessment and
Recognition System (PARS) Program.
Promotion & Reclassification
Generally, there are two ways in which you may receive an appointment to a higher-level job
classification. First, you may compete for a new position or an opening that arises when another
employee leaves an existing position. The agency may use a formal state employment application
process to obtain a list of candidates to be considered for an opening or it may use a less formal
recruitment and selection process. In either event, in order to be considered you must meet the minimum
qualifications for the higher classification and comply with the application procedures. Recruitment
notices are posted internally on the agency intranet, and at times externally on the Department of
Administrative Services website. It is your responsibility to monitor them and respond according to the
instructions on the job posting.
Additionally, you may progress to a higher level through reclassification. After working for the agency
for some time, you may find that your duties have expanded and are more consistent with a higher-level
job classification. In such cases, your supervisor will ask you to complete a job duties questionnaire,
which will be evaluated by Human Resources. If you are found to be working “out of class,” the agency
has the option of either removing the higher-level duties or reclassifying your position to the higher level.
Certain conditions must be met for reclassification. You must be in your current position for at least six
months, have a rating of “good” or better on your last two performance evaluations and meet the
minimum experience and training requirements for the higher class. If you have applied for a job
opening and did not qualify for the classification, this is evidence that you do not meet the qualifications
for the higher-level class and cannot be considered for reclassification.
Temporary Service in a Higher Class
When a temporary vacancy occurs in a non-entry level classification, such as the result of an employee
being on an extended leave of absence, the agency may fill the opening by temporarily assigning you to
a higher level as long as the assignment lasts for more than 30 days and meets any other relevant union
contract provisions. You must meet the minimum qualifications of the class. While serving in this type
of service, you are paid at the higher level, but you retain status in your permanent (lower) classification.
Benefits such as longevity and vacation accrual are based on the permanent class.
Transfers
You may voluntarily transfer within the agency or to another state agency. To place your name on a
Statewide Transfer list, for your current job class in which you hold permanent status, please visit the
DAS Website, Freenames - Department of Administrative Services (jobapscloud.com), scroll down and
follow the process of Statewide Transfers. If your job classification is unique to the agency, your transfer
options will be limited to those classes deemed comparable to the one in which you have permanent
status. Consult your union contract for more information.
If you are interested in transferring to another work location within the agency and meet the job requirements, watch for the emails Human Resources sends periodically with transfer opportunities; to be considered, you must follow the procedures noted in the email.
The agency may involuntarily transfer you under certain circumstances, generally defined in your union
contract or state personnel regulations. Transfers occur for a variety of reasons: when the agency seeks
to better use its resources, to avoid layoffs, to meet emergency or seasonal conditions, or to accommodate
you.
If you are an exempt employee, your transfer is subject to state regulations and the State Personnel Act.
Dual Employment
You may be authorized to work at a secondary agency subject to the dual employment provisions of the
regulations for state agencies. For this to occur, the secondary agency must initiate and complete the
appropriate paperwork. The secondary agency will forward a copy of the dual employment request form
to the primary agency for completion and return. If all provisions are met, subject to any fair labor
standards considerations and the operating needs of the department, you may be eligible for secondary
employment. Secondary employment may not pose a conflict of interest or interfere with the
performance of your job duties and your approved work schedule for the Department of Labor.
Personnel Records
Personnel Files
The agency maintains a digital personnel file containing information about your employment: service
ratings; personnel processing forms; appointment, promotion, and disciplinary letters. The agency also
maintains a separate, confidential file that contains your medical documents, including doctor’s notes
and medical certificates.
You may review your digital personnel file by contacting Human Resources. You may sign a waiver to
allow another person, such as a union official, to review your files. The agency must comply with written
requests for information about its employees under the state freedom-of-information law. If the agency
considers an information request to be a possible invasion of your privacy, you will be notified.
Change of Personal Data
Whenever you change your name, address, number of dependents, telephone number, or marital status,
you must promptly notify Payroll so that agency records and files may be updated. You may also need
to complete a new federal or state withholding allowance certificate (W-4 or CT W-4), or various health
insurance forms.
Working Hours
The negotiated workweek for most staff members currently averages 40 hours per week. Some union
contracts provide for a 35 or 37.5-hour workweek. Many employees work a standard schedule of 8:00
a.m. to 4:30 p.m. The agency has also established nonstandard work schedules, which are approved in
advance by the appointing authority in consultation with the Director of Human Resources. Provision
for flex time has been included in some contracts. If your position is covered by flex time or other
nonstandard workweek, your supervisor will explain its operation. The Payroll Unit will maintain your
attendance record.
From time to time and consistent with the terms of the applicable collective bargaining agreement, it
may be necessary to temporarily or permanently change your work schedule to meet operational needs.
In such a situation you will be given as much notice as possible, and at a minimum the notice required by your
union contract. Regardless of your work schedule, you are expected to arrive at work on time, return
from lunch and breaks on time, and not leave your job prior to quitting time.
Meal & Break Periods
Full-time employees are permitted two 15-minute breaks and a 30-minute unpaid meal period. Longer
unpaid meal periods are allowed with supervisory approval. The schedule for all meal and break periods
is determined by your supervisor based on business operations and staffing needs. Your supervisor will
inform you of your schedule and any required changes.
Employees are not permitted to work through lunch to leave early. Breaks do not accumulate, nor may
they be used to start late or leave early.
Overtime & Compensatory Time
Overtime occurs when you work in excess of your regular established weekly schedule. Overtime
assignments must be approved in advance, except in extreme emergencies. The Fair Labor Standards
Act (FLSA), state statutes and regulations, and your union contract govern your eligibility for overtime
and the rate of compensation. Compensatory time is a form of accrued leave time that may be used later;
it does not constitute a basis for additional compensation. Compensatory time must be taken in
accordance with the provisions of your contract and agency policy.
The FLSA may conflict with your union contract regarding compensation for overtime. Generally, you
will be paid by the method that provides the greater benefit. Hours worked in excess of 40 in one week
are generally compensated at the rate of time-and-one half. The time-and-one-half rate is derived from
your basic hourly wage rate. Some employees may be ineligible for the overtime provisions of FLSA.
Questions may be directed to Payroll.
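For illustration only, the arithmetic behind the time-and-one-half rate can be sketched as follows. The hourly figures are hypothetical, and actual eligibility, thresholds and rates are governed by the FLSA, state statutes and regulations, and your union contract; this sketch is not agency policy.

# Simplified illustration of time-and-one-half pay past a 40-hour week.
# Hypothetical figures; actual eligibility and rates are governed by the FLSA,
# state statutes and regulations, and your union contract.
def weekly_pay(hours_worked, hourly_rate, threshold=40.0):
    regular_hours = min(hours_worked, threshold)
    overtime_hours = max(hours_worked - threshold, 0.0)
    return regular_hours * hourly_rate + overtime_hours * hourly_rate * 1.5

# Example: 44 hours at a hypothetical $30.00 per hour
# = 40 x $30.00 + 4 x $45.00 = $1,380.00
print(f"${weekly_pay(44, 30.00):,.2f}")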
Shift Assignments
Some areas engage in multi-shift operations. Depending on the starting and ending times of your shift
and union contract, you may be eligible for shift-differential payments. These usually take the form of
additional pay for the hours worked on your assigned shift. Generally, any shift that begins before 6:00
a.m. or after 2:00 p.m. is subject to shift-differential payments. Some employees may not be eligible for
these payments, even when assigned to such a shift. Consult your union contract for information
regarding eligibility for the shift and weekend differentials, and the applicable pay rate.
Attendance
You are responsible for maintaining a good attendance record. Frequent absenteeism reduces the level
of your service to the agency and the public, increases operational costs, and places a burden on your
co-workers. Use your accrued leave in accordance with agency policies and procedures and ensure that
you comply with Employee Dependability Policy requirements. You should request leave time as far in
advance as possible. Refer to your union contract for additional guidelines. Agency operating needs,
the reasonableness of the request, and the specific language contained in the union contract govern the
approval or denial of your leave request. Whenever possible, avoid unscheduled leave.
Paid Leave Time
Holidays
The state grants 13 paid holidays per year to permanent, full-time employees: New Year’s Day, Martin
Luther King’s Birthday, Lincoln’s Birthday, Washington’s Birthday, Good Friday, Memorial Day,
Juneteenth Day, Independence Day, Labor Day, Columbus Day, Veterans’ Day, Thanksgiving Day and
Christmas Day. Intermittent and durational employees must work the equivalent of six months (1044
hours) to be eligible for holiday pay.
If a holiday falls on a Saturday or Sunday, the state generally designates the Friday preceding or the
Monday following as the day it will be observed. A calendar detailing the exact day of holiday
observance appears on the Human Resources intranet site.
You will be paid for a holiday if you are on the payroll on or immediately before or after the day it is
celebrated; you normally will not receive holiday pay if on a leave of absence without pay before and
after a scheduled holiday. Consult your union contract for information about compensation for work
performed on a state holiday.
Sick Leave
As a permanent employee, you accrue sick leave from your date of employment for each fully completed
calendar month of service, except as otherwise provided in the statutes. You must use sick leave when
incapacitated or in the special cases described in your union contract. Upon exhaustion of sick leave,
you must use other accrued leave in lieu of sick leave unless FMLA rules dictate otherwise. If an
employee is sick while on annual vacation leave, the time will be charged against accrued sick leave if
supported by a properly completed medical certificate.
Sick leave is not an extension of vacation or personal leave. You should maintain a sick leave balance
as a form of insurance in the event of a long-term illness.
Accrual. Full-time employees accrue paid sick leave at the rate of 1¼ days per completed month of
service or 15 days per year. If you are absent without pay for more than forty hours in any month, you
do not accrue sick leave in that month. If you are an eligible part-time employee, you accrue paid sick
leave on a pro-rated basis or on the amount of your scheduled hours as a percentage of a full-time
schedule.
Balances. Payroll records your sick leave balance (time accrued but not used) in hours and minutes.
When you retire, the state will compensate you for 25 percent of your accrued sick leave balance (to a
maximum of 60 days).
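As a rough illustration of the payout rule above (25 percent of the accrued balance, capped at 60 days), the following sketch expresses the calculation in days. The balances used are hypothetical, and how a day is valued in dollars depends on your pay plan and is not modeled here; consult Payroll for the figures that apply to you.

# Illustration of the sick leave payout rule above: 25 percent of the accrued
# balance, capped at 60 days. Balances are hypothetical; the dollar value of a
# day depends on your pay plan and is not modeled here.
def sick_leave_payout_days(balance_days):
    return min(0.25 * balance_days, 60.0)

print(sick_leave_payout_days(200))  # 50.0 days
print(sick_leave_payout_days(300))  # 60.0 days (cap applies)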
Call-In Procedure. If you are unexpectedly absent as a result of injury or illness, you must notify your
supervisor or designee as early as possible, but no later than one-half hour before your scheduled
reporting time. If your absence is continuous or lengthy and you have not been granted a medical leave
of absence, you must notify your supervisor on a daily basis. If you fail to call in, you may be placed on
unauthorized leave without pay and subject to corrective action.
Medical Documentation. Your physician must complete a medical certificate if you are absent as the
result of injury or illness for more than five working days or as otherwise outlined in your union contract
or state personnel regulations. If you fail to provide the required medical documentation, you may be
placed on unauthorized leave, which can lead to loss of pay and disciplinary action. Medical certification
forms should be emailed directly to [email protected]. Any questions must be sent
directly to [email protected].
Additional Use of Sick Leave. You may use sick leave for situations other than your own injury or
illness (a medical certificate or written statement supporting a request may be required):
• Medical, dental or optical examination or treatment when arrangements cannot be made outside working hours.
• Death in your immediate family.
• Illness or injury to a member of your immediate family.
• Funeral for a person other than an immediate family member.
• Birth, adoption or taking custody of a child.
To determine the exact number of days allowed, refer to your union contract.
Extended Illness or Recuperation. If you exhaust your accrued sick leave during a prolonged illness
or injury, you may be permitted to use other accrued time. You must obtain approval from your
immediate supervisor for use of other accrued leave to cover the remainder of the absence. In certain
circumstances, you may be granted an advance of sick leave if you have at least five years of full-time
state service. Consult your union contract for information regarding the sick leave bank or donation of
leave time.
If an employee has no accrued leave time available, a written request for a medical leave without pay
must be submitted to [email protected], and the request must be followed up in
writing upon return to work. Failure to do so will result in charging the absence to Sick Leave Without
Pay.
Illness or Injury While on Vacation. If you become ill or injured while on vacation, you may request
that the recovery time be charged to your sick leave rather than to your vacation leave. A medical
certificate or documentation supporting your request will be required.
Vacation Leave
Usage. As a full-time employee, you may begin taking paid vacation leave after six months of
continuous service. Unless otherwise stated in a union contract, a part-time employee may begin taking
paid vacation after completing the equivalent of six months of full-time service (1044 hours). Requests
for vacation leave are subject to the approval of your supervisor, based on the operating needs of the unit
and the seniority provisions of your contract.
Accrual. You accrue vacation leave at the end of each full calendar month of service. Absence without
pay for more than five days (equivalent to 40 hours) in a month results in the loss of accrual for that
month. You accrue vacation leave at the following rate for each completed month of service (prorated,
if part-time):
• 0-5 years of service: 1 day per month (12 days per year).
• 5-20 years: 1-1/4 days per month (15 days per year).
• 20 or more years: 1-2/3 days per month (20 days per year).
As a manager or confidential employee excluded from collective bargaining, you accrue vacation
leave at the rate of 1-1/4 days per completed month of service or 15 days per year. After completing 10
years of service, on January 1 of each subsequent year you will receive the following number of days in
addition to the normal accrual:
• 11 years of service: 1 additional day
• 12 years: 2 additional days
• 13 years: 3 additional days
• 14 years: 4 additional days
• 15 or more years: 5 additional days
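For illustration only, the accrual schedules above can be summarized in the following sketch of annual accrual by years of service. Boundary handling at exactly 5 and 20 years, proration for part-time employees, and contract-specific variations are assumptions not covered here; your union contract or Human Resources has the governing figures.

# Simplified illustration of the annual vacation accrual schedules above.
# Boundary handling at exactly 5 and 20 years is an assumption; proration and
# contract-specific rules are not modeled.
def annual_vacation_days(years_of_service, managerial=False):
    if managerial:
        # 15 days per year, plus 1 additional day per year of service beyond 10, up to 5
        extra = min(max(int(years_of_service) - 10, 0), 5)
        return 15 + extra
    if years_of_service < 5:
        return 12  # 1 day per month
    if years_of_service < 20:
        return 15  # 1-1/4 days per month
    return 20      # 1-2/3 days per month

print(annual_vacation_days(3))         # 12
print(annual_vacation_days(12))        # 15
print(annual_vacation_days(25))        # 20
print(annual_vacation_days(13, True))  # 18 (15 plus 3 additional days)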
Balances. Payroll will record your vacation leave balance in hours and minutes. Without agency
permission, you cannot carry more than 10 days of accrued vacation leave from one year to the next if
you are a nonexempt employee. If you are a nonexempt employee, refer to your union contract regarding your maximum accrual. If you are an exempt employee or a manager, you may
accumulate as many as 120 days of vacation time. When separated from state service, if a permanent
employee, you will receive a lump-sum payment for your vacation leave balance.
Personal Leave
As a full-time employee who has attained permanent status, you are credited with three days of personal
leave to conduct private affairs, including the observance of religious holidays. On January 1 of each
year thereafter, three days of personal leave will be credited to your leave balance. You must request
authorization in advance from your supervisor to use personal leave. Personal leave must be used prior
to the end of the calendar year or it will be forfeited. You are responsible for monitoring your time
charges to ensure that your personal leave is used within the calendar year.
Part-time employees generally are entitled to prorated personal leave; consult your union contract for
the specifics. Payroll will maintain your balance.
Jury Duty
If you are summoned for jury duty, you will not lose your regular salary or benefits. You must notify
your supervisor immediately and supply the jury notice; your supervisor will forward it along with the
reason for your absence to the Payroll Unit. The court will supply you with verification of your
attendance, which is then submitted through your supervisor to Payroll. You must return to work
whenever not actively serving on jury duty. With the exception of travel allowances, you must return
the money received for jury duty to Payroll.
Military Leave
If you are a member of the National Guard or a reserve component of the U.S. armed forces and a
permanent employee, you may apply for leave to attend required training. To verify the leave, you must
submit a copy of your military orders to [email protected] or fax to 860-622-4928.
The state permits as many as three weeks in a calendar year for field training. Paid leave for military
call-ups other than annual training is limited to unscheduled emergencies, subject to the provisions of
your union contract. Notify your supervisor as soon as you become aware of your military leave
schedule.
Leave Without Pay
Leave of Absence Without Pay (LAW)
Depending on the terms of your union contract, you may be granted a LAW without endangering your
status as a state employee. Your benefits, however, may be affected. You will not accrue vacation or
sick leave in any month in which you are on a LAW for more than five working days (or the hourly equivalent) without pay,
and service credit toward retirement, seniority and longevity may be suspended. If you are on a LAW
for pregnancy, illness, injury, or an FMLA-qualifying reason, the state will continue to pay the same
portion of your health insurance as while you were working. You will, however, be billed directly for
the amount that you previously paid through payroll deduction. If on a LAW for another reason, you
will be billed for the full cost of medical coverage.
If possible, submit your LAW request to [email protected] in advance and in
writing with appropriate documentation. Your manager may grant a LAW for as many as five
consecutive days. A LAW of longer than five days must be authorized by the Benefits and Leaves Pod
before the leave, except in extraordinary situations such as emergency medical leave. You may be
granted a LAW for a variety of purposes on a position-held or not-held basis. Your LAW must be
consistent with the requirements in your union contract or state regulations if you are an exempt
employee. If your position is held, you may resume employment on the expiration of the LAW. You
must be cleared by a physician to return to normal duties if you are on a medical LAW. This needs to
be done before you return to work. If your position is not held, your return to active service depends on
the availability of a position. The agency will consider the reason for your request, your work record
and agency operating needs when deciding whether to grant you a LAW and to hold your position.
Maternity Leave
If pregnant, you must use accrued sick leave to cover time before, during or after your delivery when a
physician certifies you as “unable to perform the requirements of your job.” You must send a Medical
Certificate - P33A to [email protected] to substantiate your disability. When your
disability period ends or you have exhausted your sick leave balance prior to the end of your disability
period, you may request to use accrued vacation and personal leave. When all your paid leave has been
used, you may request a LAW with your position held. Refer to your union contract and the FMLA
Policy for further information.
Medical Leave
You must use accrued sick leave to cover the time which you are unable to work because of illness. If
that period extends beyond five days, you will need to supply a Medical Certificate - P33A to
[email protected] to substantiate your use of sick time. When your sick leave
balance is exhausted, you must apply vacation or personal leave to cover your absence unless FMLA
rules dictate otherwise. Your union contract may contain provisions for advance of sick leave, a sick
leave bank, and donation of leave time in cases of prolonged illness. You may also request a leave of
absence without pay. Details on the requirements and provisions of such leaves are in your union
contract and the FMLA policy.
Family Leave
You may request a LAW for the birth or adoption of a child; the serious illness or health condition of a
child, spouse or parent; your own serious health condition; the placement of a foster child in your care; and certain other conditions.
A medical certificate must be submitted by email to
[email protected] to substantiate a request for leave under the Family and Medical
Leave Act (FMLA).
You must request forms by sending an email to
[email protected].
SALARY
Payment
Your job classification determines your salary grade. Classifications are assigned to a salary group based
on the amount and type of required experience and training, technical complexity, difficulty and level of
responsibility. The state establishes a number of steps for salary groups other than managerial and
confidential classes. As a new employee, you will generally start at the salary range minimum for your
job classification.
Payday
The state issues salary payments bi-weekly through a checkless system called e-pay. You will receive
payment for the work you performed during the previous two weeks. The delay allows for processing.
If you are a new employee, you should receive your first salary payment four weeks after your first
workday. If you separate from state service, you will receive your last salary payment two weeks
following the end of the last pay period worked. Earnings, itemized deductions and leave accruals are
viewable online. Questions should be directed to Payroll.
Annual Increments
Annual increments are based on the terms of your union contract. You may be raised to the next higher
step in a salary group on your anniversary date. Consult your union contract for details. If you are an appointed
official or manager, you may be awarded an increase by the governor, usually effective on January 1.
The amount of the increase will be based on your goal attainment and performance under PARS, the
Performance Assessment and Recognition System for managers.
Collective Bargaining & Cost-of-Living Increases
If you are a union member, your increase will result from the collective bargaining process. An increase
generally will be calculated as an across-the-board percentage within a negotiated salary structure and
payable in July. If you are an appointed official or a manager, the governor may award you a cost-ofliving increase, usually a percentage of your annual salary, also payable in July. When promoted, you
will normally receive a salary increase of at least one full step in the salary group, unless you are placed
at the maximum step. If promoted to a managerial position, you will receive an increase of five percent
or the minimum of the new salary range, whichever is greater.
Longevity Pay
Employees hired on or after July 1, 2011, shall not be entitled to a longevity payment; however, any
individual hired on or after said date who shall have military service which would count toward longevity
under current rules shall be entitled to longevity if they obtain the requisite service in the future.
Employees hired prior to July 1, 2011, are eligible for longevity. For those eligible employees, when
you complete the equivalent of 10 years of full-time state service (generally continuous) you will receive
a longevity payment. The amount of the longevity payment increases when you complete 15, 20, and 25 years of service. Longevity schedules appear in your union contract and other pay plans. To qualify, you must attain the required years of service by April 1 or October 1. Longevity payments are also paid
in these months. Employees not included in any collective bargaining unit are no longer eligible for
longevity payments.
DEDUCTIONS
Federal Income Tax & Social Security Tax
Federal income and Social Security taxes will be deducted from your paycheck in accordance with
federal law.
Connecticut Income Tax
State income tax will be deducted from your paycheck in accordance with state law.
Health Insurance
Health insurance coverage for eligible employees who choose to enroll in the state’s health benefit plan
will be effective the first of the month immediately following the employee’s hire date or date of
eligibility. For example, if you were hired on November 9, you must submit your application within
thirty days; your effective date of coverage would be December 1.
You may extend health and dental coverage to cover your spouse, dependent children under age 26,
and/or disabled children over age 26. Please contact Payroll for enrollment eligibility. Refer to the
Office of State Comptroller’s website for a summary of health insurance options and rates.
You must remain with your insurance carrier until the next open enrollment period, the one time a year
when you can change carriers. You may add a dependent newborn or spouse within one month of the
birth or marriage (please note if adding a new spouse, a marriage certificate is required); other dependent
changes generally are restricted to the open enrollment period. If your spouse’s insurance was terminated
through his/her employer, you may be eligible to add them as a special exception. A letter from the
employer stating insurance has been cancelled will be required. All additions, deletions, or other changes
must be processed through the Payroll Unit.
You must provide documentation of each dependent’s eligibility status at the time of enrollment. It is
your responsibility to notify the Payroll Unit when any dependent is no longer eligible for coverage.
Group Life Insurance
You may purchase term life insurance at group rates. The state pays a portion of this coverage. You
may authorize payroll deductions for this insurance after six months of employment. If you waive
coverage and later decide to enroll, you must apply with medical evidence of insurability and wait for
approval. The amount of life insurance coverage is based on your annual salary and is automatically
adjusted on April 1 and October 1 as your salary increases. Contact the Payroll Unit to obtain forms or
arrange for beneficiary changes. You may visit the Office of State Comptroller’s website
(https://carecompass.ct.gov/supplementalbenefits/) for more information.
Supplemental Benefits
The state offers various supplemental benefits to qualified employees and retirees, which are designed
to complement the benefits provided by the state. These benefits are on a voluntary basis and are paid
entirely by the employee through the convenience of payroll deduction. Available supplemental benefits
are listed on the OSC website Supplemental Benefits - Care Compass (ct.gov). Contact the authorized
vendors for information and assistance with the enrollment process.
Direct Deposit
You may deposit your paycheck in a checking or savings account in a financial institution that is a
member of the automated clearinghouse. Your funds will be electronically transmitted and available to
you after 9:00 a.m. on the date of the check. You must complete an authorization form to adjust or
cancel direct deposit. Authorization forms can be obtained from Payroll.
Deferred Compensation
Permanent employees who work more than 20 hours a week are eligible for the state’s deferred
compensation plan. Through payroll deduction, you may set aside a portion of your taxable wages (prior
to tax deferrals). The minimum contribution is $20 per pay period. Obtain details by contacting the
plan administrator.
State Employees Campaign
Through the state employee campaign, you may contribute to your choice of a range of service
organizations via payroll deduction.
Union Dues
As a member of a collective bargaining unit, you may elect to join the union and have union dues
deducted from your check. Your union determines the amount by using a set-rate or sliding-scale
formula based on the amount of your salary.
Credit Unions
As an agency employee, you may join the CT Labor Department Federal Credit Union, 200 Folly Brook
Blvd., Wethersfield, CT 06109 (telephone 860-263-6500).
As a State of Connecticut employee, you may also join the CT State Employees Credit Union. Offices
are as follows:
84 Wadsworth Street
Hartford, CT 06106
860-522-5388
1244 Storrs Road
Storrs, CT 06268
860-429-9306
2434 Berlin Turnpike
Newington, CT 06111
860-667-7668
401 West Thames Street
Norwich, CT 06360
860-889-7378
Southbury Training School
Southbury, CT 06488
203-267-7610
1666 Litchfield Turnpike
Woodbridge, CT 06525
203-397-2949
Silver & Holmes Street
Middletown, CT 06457
860-347-0479
Retirement Tiers
The state and collective bargaining units negotiate the pension agreement. The retirement system
includes five plans: Tier I, II, IIA, III and IV. For details, contact the Office of the State Comptroller at [email protected] or consult the retirement booklet for the plan of which you are a member. Online copies
are available at the OSC website Retiree Resources (ct.gov).
Tier I. Usually, you are a member of this retirement plan if you were hired on or before July 1, 1984 and
contribute by payroll deduction to your pension. You may retire at age 55 with 25 years of service, or
at age 65 with 10 years of service, or retire early at age 55 with 10 years of service – at a reduced rate.
This tier is divided into three plans. Members of Plans A and C contribute five percent of salary toward
retirement. Members of Plan A have chosen not to participate in the Social Security plan; Plan C
members pay Social Security taxes and are eligible for Social Security benefits. Plan B members
contribute two percent of salary toward retirement until they reach the Social Security maximum, and
five percent of salary above the maximum; they will receive reduced pensions when Social Security
payments begin. You also may purchase periods of service for which you have not made contributions:
war service, prior state service, and leaves of absence for medical reasons.
Tier II. If you were hired into state service from July 2, 1984 to June 30, 1997, you are automatically
covered under this noncontributing plan. If you were employed by the state on or before July 1, 1984,
and were not a member of any other state retirement plan, the Tier II plan also covers you. You
contribute two percent of your salary towards retirement. You are eligible for normal retirement benefits
after you attain: (1) age 60 with at least 25 years of vesting service; (2) age 62 with at least 10, but less
than 25 years of vesting service; or (3) age 62 with at least five years of actual state service. If you have
at least 10 years of service, you can receive retirement benefits – at a reduced rate – if you retire on the
first day of any month following your 55th birthday. Retirements on or after July 1, 2022 are subject to
the age and years of service specified in the SEBAC 2011 agreement.
Tier IIA. If you entered state service from July 1, 1997 to June 30, 2011, you are covered under this
plan as of the date of your employment. You contribute two percent of your salary towards retirement
and have the same options and benefits as a Tier II employee. If you are not eligible for any retirement
benefits when you leave state service, you may withdraw your retirement contributions. You also may
purchase periods of service for which you have not made contributions: war service and leaves of
absence for medical reasons.
Tier III. This plan covers employees hired from July 1, 2011 through July 30, 2017. As a Tier III
member, you contribute two percent of your total annual salary. Your normal retirement date is the first
of any month on or after you reach age 63 if you have at least 25 years of service, or age 65 with at least
10, but less than 25 years of service. If you have 10 years of vesting service, you can receive early
retirement benefits on the first of any month following your 58th birthday. If you are not eligible for any
retirement benefits when you leave state service, you may withdraw your retirement contributions.
Tier IV. This plan covers employees hired on or after July 31, 2017. The Tier IV retirement plan
provides elements of both a defined benefit and defined contribution plan. Defined Benefits –
Participants that satisfy the minimum eligibility criteria will qualify for a pre-defined monthly retirement
income for life, with the amount being determined by years of service, retirement age and Final Average
Earnings. You contribute 7% of your annual salary (this rate is for fiscal year July 2023 through June
2024). Defined Contribution – You contribute 1% to a defined contribution plan with a 1% employer
match. This plan also has a risk sharing component wherein for any given year the employee
contribution can be up to 2% higher depending on the plan’s performance for the previous year. This
contribution will be computed by the plan’s actuaries. (You may also contribute to a 457 plan). For
additional information please see the State Comptroller's Retirement Resources website. Please note:
If you were a former state employee who contributed to a different state retirement plan, please contact
Payroll 860-263-6195 or [email protected] to see if you qualify to be placed into a different retirement
plan.
Separation
Resignation
The personnel regulation on resignation reads: “An employee in the classified service who wishes to
voluntarily separate from state service in good standing shall give the appointing authority at least two
working weeks written notice of resignation, except that the appointing authority may require as much
notification as four weeks if the employee occupies a professional or supervisory position.”
If you resign, your written notice must include your last day of work and be submitted to your supervisor
at least two weeks before you leave. You will receive a lump-sum payment for unused vacation time if
you are a permanent employee. You may arrange to continue your health insurance benefits at the
COBRA rate for a specific period of time. Contact Payroll for details on the length of coverage and
payment amount. If you are not eligible for any retirement benefits when you leave state service, you
may withdraw your retirement contributions. If you do not return to state service within five years and
have not withdrawn your contributions, the Retirement Division will send you a refund application.
After you complete the form and return it, you will receive your contributions plus interest. If the
Retirement Division cannot locate you within 10 years after your employment ends, your contributions
will become part of the retirement fund.
If you submit your resignation less than two weeks before leaving, your separation may be regarded as
not in good standing and may affect your re-employment rights. An unauthorized absence of five or
more working days also will be considered as a resignation not in good standing. You will be notified
if your resignation is considered as not in good standing and you may file an appeal with the
Commissioner of the Department of Administrative Services.
Layoff
The state defines a layoff as an involuntary, non-disciplinary separation from state service resulting from
a lack of work, program cutback or other economic necessity. Consult your union contract for
particulars. If you are an exempt employee, consult Sec. 5-241 of the Connecticut General Statutes.
Reemployment Rights
In an effort to deliver services in a contemporary and cost-effective fashion, the State of Connecticut
uses a module called Freenames through the Online Employment Center (JobAps) as a platform for
processing the following:
• Mandatory rights for eligible individuals (reemployment/SEBAC/other mandatory rights)
• Statewide Transfer requests (non-mandatory transfers)
• Rescind of Resignation or Retirement requests
This section applies to:
Current or former State Employees who have been affected by the following:
• Layoffs
• Noticed for layoff
• Accepted a demotion in lieu of layoff
• Notified of eligibility for mandatory rights
• Recently failed a working test period and has permanent classified status
• Exercising rights to return to the classified service from the unclassified service
• Recently separated NP-2 employee with Article 39 Rights
• Current employees who request to place their names on a Statewide Transfer list
• Former employees who request to rescind their resignation in good standing or voluntary retirement.
If you retire from state service, you are eligible for temporary employment in any class in which you
had permanent status. As a re-employed retiree, you may work as many as 120 days per calendar year
(based on 40 hours per week prior to retirement) without adversely affecting your pension. Such
appointments are totally at the discretion of the agency.
Rescind of Resignation or Retirement
If you have permanent status and resign in good standing, you may, within one year of the date of your
separation, request to rescind your resignation by completing the Rescind Resignation request via the
JobAps, Freenames Application. This will enable you to be
considered for any classes in which you had permanent status. Reinstatement is strictly voluntary on the
part of the Agency and may occur at any time up to two years from the date of your separation.
Former employees shall be fully independent in and responsible for conducting their own search for
reinstatement by requesting rescind privileges via the JobAps, Freenames Application.
Use the rescind of resignation or retirement option to request to rescind a resignation in good standing,
or a retirement from state service in accordance with DAS General Letter 177.
Note: There are no reemployment rights associated with a rescind of resignation. The State of
Connecticut is not required to rehire individuals who rescind resignation. Rather, certain privileges may
be granted depending on the job class and effective date of rehire.
Requirements
A former State employee must meet the following conditions:
• Attained permanent status as a State employee
• Separated from state service in good standing from a position in the Classified service or a
bargaining unit position in the Unclassified service
• You must know the job class you resigned or retired from. To locate this information, contact
your former Human Resources Representative or refer to your last paycheck as an active
employee.
• You must include each job code matching your last held title, including different hourly equivalents. For example:
7603EU= Information Technology Analyst 1 (35 hours)
7603FD= Information Technology Analyst 1 (40 hours)
DAS will conduct a review and approve or deny all rescind requests for any or all job classes identified.
Applicants will be notified of the status of their requests via email. Please be sure to keep your contact
information updated and check your email and spam folders often as most communication will occur
via email.
For detailed instructions to request to rescind a resignation in good standing or retirement, refer
to Instructions Rescind Resignation or Retirement.
Exit Interview
Below you will find the link and QR code to access a confidential exit interview survey. Thank you for
taking the time to engage in the exit interview process. This survey will only take approximately three
minutes to complete. The information collected will help us evaluate factors like pay, benefits, work
environment, and your overall work experience. All your answers are confidential, so please be candid
with your responses. The information collected will help us to identify any potential areas where we can
implement new strategies to increase the satisfaction of our workforce. Thank you again for your time
and attention.
Link to survey: Confidential Exit Survey State of Connecticut - DAS (office.com)
Retirement
Retirement Types
State employees are members of one of several retirement programs. Once an employee has completed
the actual or vesting service required by the retirement system, he/she is eligible for a pension.
Retirements are effective on the first of the month following the last working day of the previous month.
For retirement purposes, an employee who is on prolonged sick leave will retire the first of the month
following the last working day that sick leave was used in the previous month (a medical certificate is
required) and may qualify for a disability retirement. Types of retirement include Normal, Early,
Hazardous Duty or Disability. If you plan to retire you must send your Notice of Intent to Retire and Retirement Information Form via fax to 860-622-4928 or via email to [email protected]. Please refer to the Plan Summary which can be found on the Office of the State Comptroller's website at Retiree Resources (ct.gov).
Regardless of the type of separation from service, on the last day of work the terminating employee
must return State property to her or his supervisor.
Pension Payment Options
Option A - 50% Spouse: This option will pay you a reduced benefit for your lifetime in exchange for
the protection that, should you pre-decease your spouse, the state will continue to pay 50% of your
reduced benefit for your spouse's lifetime.
Option B - 50% or 100% Contingent Annuitant: This option provides you a reduced monthly benefit
for your life and allows you to guarantee lifetime payments after your death to a selected beneficiary.
After your death, a percentage of your reduced benefit, either 50% or 100%, whichever you choose, will
continue for your beneficiary’s life.
Option C - 10 Year or 20 Year Period Certain: This option provides you a reduced monthly benefit
for your lifetime in exchange for the guarantee that monthly benefits will be paid for at least 10 or 20
years from your retirement date (whichever you choose).
Option D - Straight Life Annuity: This option pays you the maximum monthly benefit for your
lifetime only. All benefits will end upon your death, including state-sponsored health insurance for any
surviving eligible dependents.
Insurance Benefits
You must meet age and minimum service requirements to be eligible for retiree health coverage. Service
requirements vary. For more about eligibility for retiree health benefits, contact the Retiree Health
Insurance Unit at 860-702-3533.
Regardless of the retirement option you choose, you will receive a monthly pension for the rest of your
life, and, if you qualify for health insurance benefits, coverage will extend to your eligible dependents.
Once you or your dependents become eligible for Medicare, Medicare becomes your primary medical plan provider
and the state plan is supplementary.
If you retire with at least 25 years of service and have state-sponsored life insurance, the state will pay
for 50 percent of the amount of coverage you had while employed (at least $7,500). If you retire with less than
25 years of service, the state will pay a prorated amount. The Group Life Insurance Section of the
Retirement Division will contact you following your retirement concerning conversion options.
Disability retirement and pre-retirement death benefits are a part of your pension agreement. Pensions
also are subject to cost-of-living increases as outlined in the agreement.
For further information regarding retirement benefits call or email:
Office of the State Comptroller Retirement Division
165 Capitol Avenue
Hartford, CT 06106
Telephone: (860) 702-3490
Email: [email protected]
TRAINING & DEVELOPMENT
In-Service Training
You may apply for Department of Administrative Services in-service training courses. Courses should
be relevant to your position or career mobility, or to your unit’s operational needs. They are generally
held during regular work hours in the spring and fall. Supervisor approval is required. For information,
contact Employee and Organizational Development.
Management Development Courses
A calendar of courses focusing on leadership, supervisory and management development, strategic
planning, customer service skills and total quality management techniques is distributed twice a year.
Contact Employee and Organizational Development for particulars.
Tuition Reimbursement
You may seek tuition reimbursement from the state for courses taken during non-working hours at
colleges, universities, technical schools or other accredited educational institutions. You do not need
supervisory approval. Eligibility and funding provisions are outlined in your union contract if you are a
bargaining unit employee.
As a non-exempt employee, you may be reimbursed for a non-credited course through your union.
Convert course hours to credits. For example, 6-14 hours equal one credit for tuition reimbursement;
15-29 hours, two credits; and 30-44, three credits.
As a manager, you are eligible for tuition reimbursement from the State Management Advisory Council
or agency funds.
As a non-managerial confidential employee, you may apply for reimbursement in accordance with the
union contract that would have included your job classification had your class not been excluded. For a
fall semester class, you must document by Feb. 1 that you paid for a course and passed it, and by June 1
for a spring semester class.
Forms and assistance are available through Employee and Organizational Development. You must
submit your application to that unit at Central Office, 200 Folly Brook Blvd., Wethersfield, CT 06109-1114, at least two weeks before the start of a class.
Conferences, Workshops & Seminars
Your union contract may pay costs associated with conferences, workshops or seminars such as
registration fees, travel expenses and meals. You must receive supervisory approval before processing
a payment request. Consult your union contract for details.
EMPLOYMENT POLICIES
(Ctrl + Click to follow links below)
Acceptable Use of State Systems Policy - Statewide (2019)
ADA Reasonable Accommodation Policy
Affirmative Action Policy Statement – DOL (2023)
AIDS Policy – DOL (7/16/2012)
Background Check Policy and Procedures – DOL (10/31/2022)
Disposition of Public Records Policy – DOL (11/28/2011)
Discrimination and Illegal Harassment Prevention Policy – DOL (April 2023)
Drug Free Workplace State Policy – DOL (7/16/2012)
Employee Conduct Policy – DOL (8/3/2018)
Employee Dependability Policy – DOL (7/16/2012)
Employee Discipline Policy – DOL (7/16/2012)
Ethical Conduct Policy – DOL (8/2013)
Family Violence Leave Policy – Statewide GL 34 (1/2022)
Federal Family & Medical Leave Act – DOL (7/16/2012)
Health and Safety Policy – DOL (7/16/2012)
Internal Discrimination Complaint Procedure – DOL (4/18/2023)
Internal Security Standards - DOL
Office Automation Policy, Standards and Guidelines – DOL (7/16/2012)
Personal Wireless Device Policy (Rev. 9/9/2020)
Phone Use Policy (Rev. 4/23/2023)
Policy for DOL Facility Occupancy – DOL (7/9/2020)
Professional Image Policy – DOL (3/1/2023)
Prohibition of Weapons in DOL Worksites Policy – DOL (8/10/16)
Public Officials and State Employees Guide to the Code of Ethics - Statewide 07/16/2012
Software Anti-Piracy Policy – DOL (7/16/2012)
Vehicle-Use-for-State-Business-Policy--DAS-General-Letter-115--April-1-2012.pdf (ct.gov)
Violence in the Workplace Prevention – DOL (4/2012)
Workers Compensation Rights Responsibilities and Claims (ct.gov)
Workplace Incident Report and Footprints Instructions – DOL (2015)
**Please refer to online Employee Handbook for link activation.
|
<TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
[user request]
<TEXT>
[context document] | My daughter was recently diagnosed with hyperhidrosis. What are the signs and symptoms of this? I'm also looking to know what causes this, including any underlying medical conditions. Are there any medications that can help with the excessive sweating? Her doctor wants to run some tests and I was wondering what some of those might be. Is this a common ailment? | Hyperhidrosis is excessive sweating. This condition happens when you sweat more than what your body needs to regulate your temperature. You may experience sweating when you’re at rest, in cold temperatures or randomly at times when you wouldn’t expect to sweat.
There are two types of hyperhidrosis:
Primary focal hyperhidrosis: Focal hyperhidrosis is a chronic skin condition. A genetic change (mutation) causes this condition. You can inherit it from your biological family. This is the most common type of hyperhidrosis. It usually affects your armpits, hands, feet and face. It tends to start before age 25.
Secondary generalized hyperhidrosis: Generalized hyperhidrosis is excessive sweating caused by an underlying medical condition or it’s a side effect of a medication. Some examples include diabetes and Parkinson’s disease and medications, such as naproxen (Aleve®). Generalized hyperhidrosis may cause you to sweat while sleeping.
How common is hyperhidrosis?
Hyperhidrosis is common. Research suggests that an estimated 3% of adults in the United States between ages 20 and 60 have hyperhidrosis.
What are the symptoms of hyperhidrosis?
The main symptom of hyperhidrosis is sweating. When you sweat, you may feel:
Wetness on your skin.
Damp clothing.
Beads of fluid dripping from your cheeks or forehead.
Over time, hyperhidrosis can lead to the following symptoms:
Itching and inflammation when sweat irritates your skin.
Body odor, which occurs when bacteria on your skin mixes with sweat particles.
Cracked or peeling skin on your feet.
Hyperhidrosis symptoms can range in severity. You may have minor symptoms that come and go or you may have constant symptoms that have an impact on your day-to-day activities.
Sweat comes from eccrine glands, which exist in the skin throughout your body. You have the most eccrine glands in your:
Armpits or underarms (axillary hyperhidrosis).
Soles of your feet (plantar hyperhidrosis).
Palms of your hands (palmar hyperhidrosis).
Forehead and cheeks (craniofacial hyperhidrosis).
Genitals.
Lower back.
The most common location on your body to experience excessive sweating is the palms of your hands.
What causes hyperhidrosis?
Overactive sweat glands cause hyperhidrosis. Your eccrine glands (sweat glands) create sweat to cool down your body when you get hot. This process activates when you exercise or if you’re nervous. If you have hyperhidrosis, your eccrine glands activate and produce sweat more often than when your body is too hot. You may experience sweating at random times of the day when there isn’t something like an activity or emotion causing your glands to produce sweat. Research is ongoing to learn more about why your glands make too much sweat.
Your body produces sweat to cool it down and prevent overheating. There may be certain triggers in your environment that can cause your sweat glands to produce more sweat including:
Certain emotions like stress, anxiety, fear or nervousness.
Warm temperatures or humidity.
Exercise or physical activity.
Certain foods and beverages, like spicy foods, fatty foods, sugary and salty foods, and foods with high levels of protein. Beverage examples include caffeinated beverages (coffee) and alcohol.
Medications that cause sweating
Certain medications can cause sweating as a side effect, including but not limited to:
Albuterol (Proventil®).
Bupropion (Wellbutrin®).
Hydrocodone.
Insulin (Humulin® R, Novolin® R).
Levothyroxine.
Lisinopril.
Naproxen (Aleve®).
Omeprazole (Prilosec®).
Sertraline (Zoloft®).
Hyperhidrosis (generalized) could be a sign of an underlying medical condition including but not limited to:
Acromegaly.
An infection (tuberculosis).
Anxiety.
Cancer.
Diabetes.
Heart disease or heart failure.
Hyperthyroidism.
Menopause.
Obesity.
Parkinson’s disease.
Does hyperhidrosis run in families?
Yes, you may be more at risk of hyperhidrosis, specifically focal hyperhidrosis, if someone in your biological family has the condition. Research indicates that a hereditary genetic mutation or change to your DNA could cause hyperhidrosis.
Hyperhidrosis can cause complications that include:
A skin infection.
Skin changes, such as paleness, discoloration, cracks or wrinkles.
Maceration, or unusually soft, moist skin.
Hyperhidrosis can also impact your mental health. You may find yourself changing your routine to hide your symptoms from others. Constant sweating may be so severe that you avoid common actions, such as lifting your arms or shaking hands. You may even give up activities you enjoy to avoid problems or embarrassment from excessive sweating. Contact a healthcare provider if hyperhidrosis affects your mental health and social well-being.
Diagnosis and Tests
How is hyperhidrosis diagnosed?
A healthcare provider will diagnose hyperhidrosis after a physical exam and learning more about your symptoms and medical history. They’ll evaluate your symptoms using diagnostic criteria. If you experienced excessive sweating for at least six months and answered yes to at least two of the following questions, it may lead to a hyperhidrosis diagnosis:
Sweating occurs on your underarms, palms, soles or face.
You sweat the same on both sides of your body.
You don’t sweat at night or sweat less at night.
An episode of sweating lasts for at least one week.
You have a history of hyperhidrosis in your biological family.
Sweating interferes with your ability to do certain activities.
You’re younger than 25 years old.
What tests diagnose hyperhidrosis?
A healthcare provider may use one of the following tests to determine the cause of hyperhidrosis:
Starch-iodine test: Your provider applies an iodine solution to the sweaty area and sprinkles starch over the iodine solution. In places where you have excess sweating, the solution turns dark blue.
Paper test: Your provider places special paper on the affected area to absorb sweat. Later, your provider weighs the paper to determine how much sweat you have.
Blood or imaging tests: These tests can take a sample of your blood or take pictures underneath your skin to help your healthcare provider learn more about what causes your symptoms.
Management and Treatment
You can manage your symptoms of hyperhidrosis at home by:
Using antiperspirants and deodorants. Antiperspirants work by sealing up sweat glands so your body stops producing sweat. A provider may recommend certain over-the-counter (OTC) or prescription-strength varieties. The best deodorant for hyperhidrosis is an aluminum-based product.
Showering or bathing more often. Changing your routine, such as showering more often, may improve mild symptoms.
Wearing breathable clothing. Choosing clothing that’s breathable and more absorbent, like cotton, can help you feel more comfortable if you sweat. Avoid blended fabrics and polyester, which can trap heat and make you sweat more.
Hyperhidrosis medications
A healthcare provider or dermatologist may recommend taking medications to reduce your symptoms of hyperhidrosis, including:
Anticholinergic agents (glycopyrrolate and oxybutynin).
Antidepressants.
Beta-blockers.
Medicated wipes (glycopyrronium tosylate, Qbrexza®).
Aluminum chloride gel.
Hyperhidrosis treatment therapies
If your symptoms don’t improve with at-home treatment and/or medications, your healthcare provider may recommend more specialized therapies:
Iontophoresis: You place your hands or feet in a shallow tub of tap water. A special device emits a low electrical current through the water, blocking sweat glands over time. Each treatment takes 10 to 20 minutes. You may need repeated treatments. Insurance may cover the equipment, enabling you to do the therapy at home.
Botulinum toxin (Botox®) injections: Injecting botulinum toxin into an overactive nerve can stop sweat production for months at a time. Repeat treatments are necessary.
Microwave therapy: Your provider places a high-tech device (miraDry®) against the affected area of your skin. The device emits thermal energy (heat), which destroys sweat glands permanently. This is an hour-long procedure that takes place in a healthcare provider’s office.
Hyperhidrosis surgery
Endoscopic thoracic sympathectomy: ETS is a minimally invasive type of surgery where a surgeon will cut a nerve in your body to turn off the signal that tells your glands to produce sweat.
Sweat gland removal: A surgeon will remove your sweat glands using a laser, scraping (curettage), cutting (excision) or liposuction.
Surgery has the potential to offer lasting benefits for persistent sweating that doesn’t respond to other therapies. | <TASK DESCRIPTION>
Only use the provided text to answer the question, no outside sources.
<QUESTION>
My daughter was recently diagnosed with hyperhidrosis. What are the signs and symptoms of this? I'm also looking to know what causes this, including any underlying medical conditions. Are there any medications that can help with the excessive sweating? Her doctor wants to run some tests and I was wondering what some of those might be. Is this a common ailment?
<TEXT>
Hyperhidrosis is excessive sweating. This condition happens when you sweat more than what your body needs to regulate your temperature. You may experience sweating when you’re at rest, in cold temperatures or randomly at times when you wouldn’t expect to sweat.
There are two types of hyperhidrosis:
Primary focal hyperhidrosis: Focal hyperhidrosis is a chronic skin condition. A genetic change (mutation) causes this condition. You can inherit it from your biological family. This is the most common type of hyperhidrosis. It usually affects your armpits, hands, feet and face. It tends to start before age 25.
Secondary generalized hyperhidrosis: Generalized hyperhidrosis is excessive sweating caused by an underlying medical condition or it’s a side effect of a medication. Some examples include diabetes and Parkinson’s disease and medications, such as naproxen (Aleve®). Generalized hyperhidrosis may cause you to sweat while sleeping.
How common is hyperhidrosis?
Hyperhidrosis is common. Research suggests that an estimated 3% of adults in the United States between ages 20 and 60 have hyperhidrosis.
What are the symptoms of hyperhidrosis?
The main symptom of hyperhidrosis is sweating. When you sweat, you may feel:
Wetness on your skin.
Damp clothing.
Beads of fluid dripping from your cheeks or forehead.
Over time, hyperhidrosis can lead to the following symptoms:
Itching and inflammation when sweat irritates your skin.
Body odor, which occurs when bacteria on your skin mixes with sweat particles.
Cracked or peeling skin on your feet.
Hyperhidrosis symptoms can range in severity. You may have minor symptoms that come and go or you may have constant symptoms that have an impact on your day-to-day activities.
Sweat comes from eccrine glands, which exist in the skin throughout your body. You have the most eccrine glands in your:
Armpits or underarms (axillary hyperhidrosis).
Soles of your feet (plantar hyperhidrosis).
Palms of your hands (palmar hyperhidrosis).
Forehead and cheeks (craniofacial hyperhidrosis).
Genitals.
Lower back.
The most common location on your body to experience excessive sweating is the palms of your hands.
What causes hyperhidrosis?
Overactive sweat glands cause hyperhidrosis. Your eccrine glands (sweat glands) create sweat to cool down your body when you get hot. This process activates when you exercise or if you’re nervous. If you have hyperhidrosis, your eccrine glands activate and produce sweat more often than when your body is too hot. You may experience sweating at random times of the day when there isn’t something like an activity or emotion causing your glands to produce sweat. Research is ongoing to learn more about why your glands make too much sweat.
Your body produces sweat to cool it down and prevent overheating. There may be certain triggers in your environment that can cause your sweat glands to produce more sweat including:
Certain emotions like stress, anxiety, fear or nervousness.
Warm temperatures or humidity.
Exercise or physical activity.
Certain foods and beverages, like spicy foods, fatty foods, sugary and salty foods, and foods with high levels of protein. Beverage examples include caffeinated beverages (coffee) and alcohol.
Medications that cause sweating
Certain medications can cause sweating as a side effect, including but not limited to:
Albuterol (Proventil®).
Bupropion (Wellbutrin®).
Hydrocodone.
Insulin (Humulin® R, Novolin® R).
Levothyroxine.
Lisinopril.
Naproxen (Aleve®).
Omeprazole (Prilosec®).
Sertraline (Zoloft®).
Hyperhidrosis (generalized) could be a sign of an underlying medical condition including but not limited to:
Acromegaly.
An infection (tuberculosis).
Anxiety.
Cancer.
Diabetes.
Heart disease or heart failure.
Hyperthyroidism.
Menopause.
Obesity.
Parkinson’s disease.
Does hyperhidrosis run in families?
Yes, you may be more at risk of hyperhidrosis, specifically focal hyperhidrosis, if someone in your biological family has the condition. Research indicates that a hereditary genetic mutation or change to your DNA could cause hyperhidrosis.
Hyperhidrosis can cause complications that include:
A skin infection.
Skin changes, such as paleness, discoloration, cracks or wrinkles.
Maceration, or unusually soft, moist skin.
Hyperhidrosis can also impact your mental health. You may find yourself changing your routine to hide your symptoms from others. Constant sweating may be so severe that you avoid common actions, such as lifting your arms or shaking hands. You may even give up activities you enjoy to avoid problems or embarrassment from excessive sweating. Contact a healthcare provider if hyperhidrosis affects your mental health and social well-being.
Diagnosis and Tests
How is hyperhidrosis diagnosed?
A healthcare provider will diagnose hyperhidrosis after a physical exam and learning more about your symptoms and medical history. They’ll evaluate your symptoms using diagnostic criteria. If you experienced excessive sweating for at least six months and answered yes to at least two of the following questions, it may lead to a hyperhidrosis diagnosis:
Sweating occurs on your underarms, palms, soles or face.
You sweat the same on both sides of your body.
You don’t sweat at night or sweat less at night.
An episode of sweating lasts for at least one week.
You have a history of hyperhidrosis in your biological family.
Sweating interferes with your ability to do certain activities.
You’re younger than 25 years old.
What tests diagnose hyperhidrosis?
A healthcare provider may use one of the following tests to determine the cause of hyperhidrosis:
Starch-iodine test: Your provider applies an iodine solution to the sweaty area and sprinkles starch over the iodine solution. In places where you have excess sweating, the solution turns dark blue.
Paper test: Your provider places special paper on the affected area to absorb sweat. Later, your provider weighs the paper to determine how much sweat you have.
Blood or imaging tests: These tests can take a sample of your blood or take pictures underneath your skin to help your healthcare provider learn more about what causes your symptoms.
Management and Treatment
You can manage your symptoms of hyperhidrosis at home by:
Using antiperspirants and deodorants. Antiperspirants work by sealing up sweat glands so your body stops producing sweat. A provider may recommend certain over-the-counter (OTC) or prescription-strength varieties. The best deodorant for hyperhidrosis is an aluminum-based product.
Showering or bathing more often. Changing your routine, such as showering more often, may improve mild symptoms.
Wearing breathable clothing. Choosing clothing that’s breathable and more absorbent, like cotton, can help you feel more comfortable if you sweat. Avoid blended fabrics and polyester, which can trap heat and make you sweat more.
Hyperhidrosis medications
A healthcare provider or dermatologist may recommend taking medications to reduce your symptoms of hyperhidrosis, including:
Anticholinergic agents (glycopyrrolate and oxybutynin).
Antidepressants.
Beta-blockers.
Medicated wipes (glycopyrronium tosylate, Qbrexza®).
Aluminum chloride gel.
Hyperhidrosis treatment therapies
If your symptoms don’t improve with at-home treatment and/or medications, your healthcare provider may recommend more specialized therapies:
Iontophoresis: You place your hands or feet in a shallow tub of tap water. A special device emits a low electrical current through the water, blocking sweat glands over time. Each treatment takes 10 to 20 minutes. You may need repeated treatments. Insurance may cover the equipment, enabling you to do the therapy at home.
Botulinum toxin (Botox®) injections: Injecting botulinum toxin into an overactive nerve can stop sweat production for months at a time. Repeat treatments are necessary.
Microwave therapy: Your provider places a high-tech device (miraDry®) against the affected area of your skin. The device emits thermal energy (heat), which destroys sweat glands permanently. This is an hour-long procedure that takes place in a healthcare provider’s office.
Hyperhidrosis surgery
Endoscopic thoracic sympathectomy: ETS is a minimally invasive type of surgery where a surgeon will cut a nerve in your body to turn off the signal that tells your glands to produce sweat.
Sweat gland removal: A surgeon will remove your sweat glands using a laser, scraping (curettage), cutting (excision) or liposuction.
Surgery has the potential to offer lasting benefits for persistent sweating that doesn’t respond to other therapies.
https://my.clevelandclinic.org/health/diseases/17113-hyperhidrosis |
Using only this document provide the answer in a single sentence. It should be between 15 to 30 words. | Are there treatments for EDS? | Ehlers-Danlos syndromes
Ehlers-Danlos syndromes (EDS) are a group of rare inherited conditions that affect connective tissue.
Connective tissues provide support in skin, tendons, ligaments, blood vessels, internal organs and bones.
Symptoms of Ehlers-Danlos syndromes (EDS)
There are several types of EDS that may share some symptoms.
These include:
an increased range of joint movement (joint hypermobility)
stretchy skin
fragile skin that breaks or bruises easily
EDS can affect people in different ways. For some, the condition is relatively mild, while for others their symptoms can be disabling.
The different types of EDS are caused by faults in certain genes that make connective tissue weaker.
Depending on the type of EDS, the faulty gene may have been inherited from 1 parent or both parents.
Sometimes the faulty gene is not inherited, but occurs in the person for the first time.
Some of the rare, severe types can be life threatening.
Main types of Ehlers-Danlos syndromes (EDS)
There are 13 types of EDS, most of which are rare.
Hypermobile EDS (hEDS) is the most common type.
Other types of EDS include classical EDS, vascular EDS and kyphoscoliotic EDS.
The EDS Support UK website has more information about the different types of EDS
Hypermobile EDS
People with hEDS may have:
joint hypermobility
loose, unstable joints that dislocate easily
joint pain and clicking joints
extreme tiredness (fatigue)
skin that bruises easily
digestive problems, such as heartburn and constipation
dizziness and an increased heart rate after standing up
problems with internal organs, such as mitral valve problems or organ prolapse
problems with bladder control (urinary incontinence)
Currently, there are no tests to confirm whether someone has hEDS.
The diagnosis is made based on a person's medical history and a physical examination.
Classical EDS
Classical EDS (cEDS) is less common than hypermobile EDS and tends to affect the skin more.
People with cEDS may have:
joint hypermobility
loose, unstable joints that dislocate easily
stretchy skin
fragile skin that can split easily, especially over the forehead, knees, shins and elbows
smooth, velvety skin that bruises easily
wounds that are slow to heal and leave wide scars
hernias and organ prolapse
Vascular EDS
Vascular EDS (vEDS) is a rare type of EDS and is often considered to be the most serious.
It affects the blood vessels and internal organs, which can cause them to split open and lead to life-threatening bleeding.
People with vEDS may have:
skin that bruises very easily
thin skin with visible small blood vessels, particularly on the upper chest and legs
fragile blood vessels that can bulge or tear, resulting in serious internal bleeding
a risk of organ problems, such as the bowel tearing, the womb tearing (in late pregnancy) and partial collapse of the lung
hypermobile fingers and toes, unusual facial features (such as a thin nose and lips, large eyes and small earlobes), varicose veins and delayed wound healing
Kyphoscoliotic EDS
Kyphoscoliotic EDS (kEDS) is rare.
People with kEDS may have:
curvature of the spine – this starts in early childhood and often gets worse in the teenage years
joint hypermobility
loose, unstable joints that dislocate easily
weak muscle tone from childhood (hypotonia) – this may cause a delay in sitting and walking, or difficulty walking if symptoms get worse
fragile eyes that can easily be damaged
soft, velvety skin that is stretchy, bruises easily and scars
Hypermobility spectrum disorder (HSD)
Some people have problems caused by hypermobility, but do not have any of the specific EDS conditions. They may be diagnosed with hypermobility spectrum disorder (HSD), which is treated in the same way as hEDS.
Getting medical advice
See a GP if you have several troublesome symptoms of EDS.
You do not usually need to worry if you only have a few symptoms and they're not causing any problems.
Joint hypermobility, for example, is relatively common, affecting around 1 in 30 people. It's unlikely to be caused by EDS if you do not have any other symptoms.
The GP may refer you to a joint specialist (rheumatologist) if you have problems with your joints and they suspect EDS.
If there's a possibility you may have 1 of the rare types of EDS, the GP can refer you to your local genetics service for an assessment.
The genetics specialist will ask about your medical history, family history, assess your symptoms and may carry out a genetic blood test to confirm the diagnosis.
If further investigation is needed, your hospital doctor can refer you to a specialist EDS diagnostic service based in Sheffield or London – see the Annabelle's Challenge website for more information.
Treatment for Ehlers-Danlos syndromes (EDS)
There's no specific treatment for EDS, but it's possible to manage many of the symptoms with support and advice.
People with EDS may also benefit from support from a number of different healthcare professionals.
For example:
a physiotherapist can teach you exercises to help strengthen your joints, avoid injuries and manage pain
an occupational therapist can help you manage daily activities and give advice on equipment that may help you
counselling and cognitive behavioural therapy (CBT) may be useful if you're struggling to cope with long-term pain
for certain types of EDS, regular scans carried out in hospital can detect problems with internal organs
genetic counselling can help you learn more about the cause of your condition, how it's inherited, and what the chances are of passing it on to your children
Your GP or consultant can refer you to these services.
Information:
Self-refer for treatment
If you have Ehlers-Danlos syndromes, you might be able to refer yourself directly to services for help with your condition without seeing a GP.
To find out if there are any services in your area:
ask the reception staff at your GP surgery
check your GP surgery's website
contact your integrated care board (ICB) – find your local ICB
search online for NHS treatment for Ehlers-Danlos syndromes near you
Living with Ehlers-Danlos syndromes (EDS)
It's important to be careful about activities that put a lot of strain on your joints or put you at risk of injury.
But it's also important not to be overprotective and avoid living an otherwise normal life.
Advice will depend on which type of EDS you have and how it affects you:
you may be advised to avoid some activities entirely, such as heavy lifting and contact sports
for some activities you may need to wear appropriate protection and be taught how to reduce the strain on your joints
lower-risk activities, such as swimming or pilates, may be recommended to help you stay fit and healthy
if fatigue is a problem, you can be taught ways to conserve your energy and pace your activities
How Ehlers-Danlos syndromes (EDS) are inherited
EDS can be inherited, but it can also happen by chance in someone without a family history of the condition.
The 2 main ways EDS is inherited are:
autosomal dominant inheritance (hypermobile, classical and vascular EDS) – the faulty gene that causes EDS is passed on by 1 parent and there's a 1 in 2 chance of each of their children developing the condition
autosomal recessive inheritance (kyphoscoliotic EDS) – the faulty gene is inherited from both parents and there's a 1 in 4 chance of each of their children developing the condition
A person with EDS can only pass on the same type of EDS to their children.
For example, the children of someone with hypermobile EDS cannot inherit vascular EDS.
The severity of the condition can vary within the same family.
More information
The following websites provide more information, advice and support for people with EDS and their families:
Ehlers-Danlos Support UK – you can also call their free helpline on 0800 907 8518 or find local support groups
Hypermobility Syndromes Association (HMSA) – you can also call their helpline on 0333 011 6388 or find local support groups
Information about you
If you have EDS, your clinical team will pass information about you on to the National Congenital Anomaly and Rare Diseases Registration Service.
This helps scientists look for better ways to prevent and treat this condition.
You can opt out of the register at any time.
Page last reviewed: 04 October 2022
Next review due: 04 October 2025
| question: Are there treatments for EDS?
----------
context: Ehlers-Danlos syndromes
Ehlers-Danlos syndromes (EDS) are a group of rare inherited conditions that affect connective tissue.
Connective tissues provide support in skin, tendons, ligaments, blood vessels, internal organs and bones.
Symptoms of Ehlers-Danlos syndromes (EDS)
There are several types of EDS that may share some symptoms.
These include:
an increased range of joint movement (joint hypermobility)
stretchy skin
fragile skin that breaks or bruises easily
EDS can affect people in different ways. For some, the condition is relatively mild, while for others their symptoms can be disabling.
The different types of EDS are caused by faults in certain genes that make connective tissue weaker.
Depending on the type of EDS, the faulty gene may have been inherited from 1 parent or both parents.
Sometimes the faulty gene is not inherited, but occurs in the person for the first time.
Some of the rare, severe types can be life threatening.
Main types of Ehlers-Danlos syndromes (EDS)
There are 13 types of EDS, most of which are rare.
Hypermobile EDS (hEDS) is the most common type.
Other types of EDS include classical EDS, vascular EDS and kyphoscoliotic EDS.
The EDS Support UK website has more information about the different types of EDS
Hypermobile EDS
People with hEDS may have:
joint hypermobility
loose, unstable joints that dislocate easily
joint pain and clicking joints
extreme tiredness (fatigue)
skin that bruises easily
digestive problems, such as heartburn and constipation
dizziness and an increased heart rate after standing up
problems with internal organs, such as mitral valve problems or organ prolapse
problems with bladder control (urinary incontinence)
Currently, there are no tests to confirm whether someone has hEDS.
The diagnosis is made based on a person's medical history and a physical examination.
Classical EDS
Classical EDS (cEDS) is less common than hypermobile EDS and tends to affect the skin more.
People with cEDS may have:
joint hypermobility
loose, unstable joints that dislocate easily
stretchy skin
fragile skin that can split easily, especially over the forehead, knees, shins and elbows
smooth, velvety skin that bruises easily
wounds that are slow to heal and leave wide scars
hernias and organ prolapse
Vascular EDS
Vascular EDS (vEDS) is a rare type of EDS and is often considered to be the most serious.
It affects the blood vessels and internal organs, which can cause them to split open and lead to life-threatening bleeding.
People with vEDS may have:
skin that bruises very easily
thin skin with visible small blood vessels, particularly on the upper chest and legs
fragile blood vessels that can bulge or tear, resulting in serious internal bleeding
a risk of organ problems, such as the bowel tearing, the womb tearing (in late pregnancy) and partial collapse of the lung
hypermobile fingers and toes, unusual facial features (such as a thin nose and lips, large eyes and small earlobes), varicose veins and delayed wound healing
Kyphoscoliotic EDS
Kyphoscoliotic EDS (kEDS) is rare.
People with kEDS may have:
curvature of the spine – this starts in early childhood and often gets worse in the teenage years
joint hypermobility
loose, unstable joints that dislocate easily
weak muscle tone from childhood (hypotonia) – this may cause a delay in sitting and walking, or difficulty walking if symptoms get worse
fragile eyes that can easily be damaged
soft, velvety skin that is stretchy, bruises easily and scars
Hypermobility spectrum disorder (HSD)
Some people have problems caused by hypermobility, but do not have any of the specific EDS conditions. They may be diagnosed with hypermobility spectrum disorder (HSD), which is treated in the same way as hEDS.
Getting medical advice
See a GP if you have several troublesome symptoms of EDS.
You do not usually need to worry if you only have a few symptoms and they're not causing any problems.
Joint hypermobility, for example, is relatively common, affecting around 1 in 30 people. It's unlikely to be caused by EDS if you do not have any other symptoms.
The GP may refer you to a joint specialist (rheumatologist) if you have problems with your joints and they suspect EDS.
If there's a possibility you may have 1 of the rare types of EDS, the GP can refer you to your local genetics service for an assessment.
The genetics specialist will ask about your medical history, family history, assess your symptoms and may carry out a genetic blood test to confirm the diagnosis.
If further investigation is needed, your hospital doctor can refer you to a specialist EDS diagnostic service based in Sheffield or London – see the Annabelle's Challenge website for more information.
Treatment for Ehlers-Danlos syndromes (EDS)
There's no specific treatment for EDS, but it's possible to manage many of the symptoms with support and advice.
People with EDS may also benefit from support from a number of different healthcare professionals.
For example:
a physiotherapist can teach you exercises to help strengthen your joints, avoid injuries and manage pain
an occupational therapist can help you manage daily activities and give advice on equipment that may help you
counselling and cognitive behavioural therapy (CBT) may be useful if you're struggling to cope with long-term pain
for certain types of EDS, regular scans carried out in hospital can detect problems with internal organs
genetic counselling can help you learn more about the cause of your condition, how it's inherited, and what the chances are of passing it on to your children
Your GP or consultant can refer you to these services.
Information:
Self-refer for treatment
If you have Ehlers-Danlos syndromes, you might be able to refer yourself directly to services for help with your condition without seeing a GP.
To find out if there are any services in your area:
ask the reception staff at your GP surgery
check your GP surgery's website
contact your integrated care board (ICB) – find your local ICB
search online for NHS treatment for Ehlers-Danlos syndromes near you
Living with Ehlers-Danlos syndromes (EDS)
It's important to be careful about activities that put a lot of strain on your joints or put you at risk of injury.
But it's also important not to be overprotective and avoid living an otherwise normal life.
Advice will depend on which type of EDS you have and how it affects you:
you may be advised to avoid some activities entirely, such as heavy lifting and contact sports
for some activities you may need to wear appropriate protection and be taught how to reduce the strain on your joints
lower-risk activities, such as swimming or pilates, may be recommended to help you stay fit and healthy
if fatigue is a problem, you can be taught ways to conserve your energy and pace your activities
How Ehlers-Danlos syndromes (EDS) are inherited
EDS can be inherited, but it can also happen by chance in someone without a family history of the condition.
The 2 main ways EDS is inherited are:
autosomal dominant inheritance (hypermobile, classical and vascular EDS) – the faulty gene that causes EDS is passed on by 1 parent and there's a 1 in 2 chance of each of their children developing the condition
autosomal recessive inheritance (kyphoscoliotic EDS) – the faulty gene is inherited from both parents and there's a 1 in 4 chance of each of their children developing the condition
A person with EDS can only pass on the same type of EDS to their children.
For example, the children of someone with hypermobile EDS cannot inherit vascular EDS.
The severity of the condition can vary within the same family.
More information
The following websites provide more information, advice and support for people with EDS and their families:
Ehlers-Danlos Support UK – you can also call their free helpline on 0800 907 8518 or find local support groups
Hypermobility Syndromes Association (HMSA) – you can also call their helpline on 0333 011 6388 or find local support groups
Information about you
If you have EDS, your clinical team will pass information about you on to the National Congenital Anomaly and Rare Diseases Registration Service.
This helps scientists look for better ways to prevent and treat this condition.
You can opt out of the register at any time.
Page last reviewed: 04 October 2022
Next review due: 04 October 2025
----------
instructions: Using only this document provide the answer in a single sentence. It should be between 15 to 30 words. |